
Tutorial for the Delphi artistic-text controls GR32_Lines and GR32_Text

The title "gr32_lines" refers to a graphics control library used with the Delphi programming language. Delphi is an object-oriented language aimed mainly at rapid application development, particularly of Windows applications. The gr32_lines library specialises in artistic text and line drawing, offering a rich interface for designing and rendering graphical lines. Artistic text, that is, text rendered as graphics, is typically used to polish an interface, attract the user's attention and add visual impact; with this library a developer can achieve fairly complex visual effects with little effort.

The description yields the following points:

1. Delphi is a strongly typed, compiled, object-oriented language developed by Embarcadero. It is based on Object Pascal and adds its own extensions.
2. Delphi controls are user-interface elements that can be dropped onto a Delphi Form, ranging from simple buttons and list boxes to complex custom components.
3. gr32_lines is a graphics control library written specifically for Delphi. It provides high-quality line and text rendering and is particularly suited to creating artistic text effects in graphical interfaces.
4. Creating artistic text requires some background in computer graphics, for example line drawing, colour filling and font rendering.
5. gr32_lines likely exposes specific properties and methods, for example for setting line width, colour and style, and the font, size and position of text.

The archive's file list contains the following items:

1. GR32_Lines.pas: the Pascal source file with the main code of the gr32_lines control. Pascal is a high-level language widely used in both teaching and commercial development.
2. GR32_Text.pas: a source file that probably contains the text-handling code accompanying gr32_lines.
3. gr32textsample2.png and gr32textsample.png: image files, most likely examples of how the library processes and displays text.
4. gr32linessample.png: an image illustrating the library's line-drawing capabilities.
5. GR32_Lines_ChangeLog.txt and GR32_Text_ChangeLog.txt: change logs recording what each release of the library added, fixed or improved; for developers they are the quickest way to see what has changed since the previous version.
6. GR32TextSample.zip and GR32LinesSample.zip: archives that presumably contain sample projects which can be unpacked, studied and run to see gr32_lines in practical use.

In summary, gr32_lines is a third-party control library for the Delphi environment dedicated to drawing lines and text in graphical interfaces, and it is especially well suited to creating artistic text. Developers can read the supplied Pascal source files to understand the internal implementation, follow the change logs to track new features and fixes, and use the sample images and projects to learn how to apply the library to real interface designs.
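As an illustration only (the sample projects in GR32LinesSample.zip and GR32TextSample.zip are the authoritative reference), the following minimal Object Pascal sketch shows the kind of drawing code such a sample typically contains. It deliberately uses only core Graphics32 calls (TBitmap32, LineFS, RenderText); the classes and routines actually exported by GR32_Lines.pas and GR32_Text.pas are not documented here and should be looked up in those source files.

uses
  GR32;  // core Graphics32 unit on which GR32_Lines / GR32_Text are built

procedure DrawSample(const FileName: string);
var
  Bmp: TBitmap32;
begin
  Bmp := TBitmap32.Create;
  try
    Bmp.SetSize(400, 200);
    Bmp.Clear(clWhite32);

    // Anti-aliased line: the kind of output shown in gr32linessample.png
    Bmp.LineFS(20, 150, 380, 50, clBlue32);

    // Plain anti-aliased text; GR32_Text layers outlined/filled "artistic"
    // text effects on top of this kind of rendering
    Bmp.Font.Name := 'Arial';
    Bmp.Font.Size := 24;
    Bmp.RenderText(40, 80, 'GR32 sample text', 2, clBlack32);

    Bmp.SaveToFile(FileName);  // writes the result as a BMP file
  finally
    Bmp.Free;
  end;
end;

A sample project would normally call a routine like this from a form's OnPaint or a button handler and display the bitmap in a TImage32 control instead of saving it to disk.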

339.2 24719.48 17118.75 69.25 340.7 27921.59 19389.32 69.44 342.3 30279.36 21196.03 70.00 343.8 31832.10 22354.63 70.23 345.3 31917.38 22175.70 69.48 … 2497.0 790.97 102.06 12.90 2499.1 852.72 51.68 6.06 2501.2 969.59 52.41 5.41 2503.4 1059.60 79.47 7.50 2505.5 1079.60 134.95 12.50 2507.6 910.80 138.00 15.15 2509.7 844.80 112.64 13.33 2511.8 772.88 57.25 7.41 2513.9 723.88 202.69 28.00 (读取数据时跳过第二列) 给出每个步骤所需的Matlab函数代码以及每个函数所需的调用代码 ,fun_F0.m的结构大致如下:%https://oceancolor.gsfc.nasa.gov/docs/rsr/f0.txt %fields=wavelength,irradiance; %units=nm,uW/cm^2/nm %missing=-999; % Thuillier, G., M. Hers?, P. C. Simon, D. Labs, H. Mandel, D. Gillotay, % ! and T. Foujols, 2003, "The solar spectral irradiance from 200 % ! to 2400 nm as measured by the SOLSPEC spectrometer from the % ! ATLAS 1-2-3 and EURECA missions, Solar Physics, 214(1): 1-22 %delimiter=space; %wavelength range 200 - 2397nm at 1nm increments; function F0=fun_F0() F0=[200 0.7729 201 0.8143 202 0.8756 203 0.9263 204 1.0335 205 1.1149 206 1.1211 207 1.2612 208 1.4234 209 1.8649 210 2.6086 211 3.1691 212 3.6546 213 3.2686 214 3.9173 … 2381 6.2889 2382 6.2090 2383 6.0941 2384 5.9994 2385 5.9797 2386 5.9948 2387 6.0355 2388 6.0769 2389 6.1063 2390 6.1071 2391 6.0972 2392 6.0766 2393 6.0643 2394 6.0564 2395 6.0574 2396 6.0511 2397 6.0476]; F0(F0==-999)=NaN;
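Steps 6), 7), 9) and 11) above are direct formula evaluations, so a compact numerical sketch is given below. It implements equations (1)–(7) exactly as written in the listing (angles in degrees, wavelength in micrometres, pressure in hPa). The example inputs are assumptions for illustration: day of year 298 for 24 Oct 2024 (leap year), roughly 34.61°N / 119.21°E converted from the latitude/longitude header fields, and a 14.3 h Beijing clock time taken from the GPS time.

# Worked sketch of equations (1)-(7) from the task list above (illustration only).
import math

def deg_cos(x): return math.cos(math.radians(x))
def deg_sin(x): return math.sin(math.radians(x))

def sun_earth_factor(day_of_year):                     # eq. (1)
    return 1.0 + 0.033 * deg_cos(360.0 * day_of_year / 365.0)

def solar_declination(day_of_year):                    # eq. (3), degrees
    return -23.45 * deg_cos(360.0 / 365.0 * (day_of_year + 10))

def solar_time(clock_time_h, lon_std_deg, lon_local_deg):   # eq. (5), as written in the listing
    return clock_time_h + 4.0 / 60.0 * (lon_std_deg - lon_local_deg)

def hour_angle(solar_time_h):                          # eq. (4), degrees
    return 15.0 * (solar_time_h - 12.0)

def cos_zenith(lat_deg, decl_deg, hour_angle_deg):     # eq. (2)
    return (deg_sin(lat_deg) * deg_sin(decl_deg)
            + deg_cos(lat_deg) * deg_cos(decl_deg) * deg_cos(hour_angle_deg))

def air_mass(zenith_deg):                              # eq. (6), Kasten-Young form
    return 1.0 / (deg_cos(zenith_deg)
                  + 0.50572 * (96.07995 - zenith_deg) ** (-1.6364))

def rayleigh_tau(wavelength_um, pressure_hpa, p0_hpa=1013.25):   # eq. (7)
    lam = wavelength_um
    return (pressure_hpa / p0_hpa) * (84.35 / lam**4 - 1.255 / lam**5 + 1.40 / lam**6) * 1e-4

if __name__ == "__main__":
    d = 298                                            # 24 Oct 2024
    t_sol = solar_time(14.3, 120.0, 119.2)             # clock time and meridians are assumptions
    mu = cos_zenith(34.61, solar_declination(d), hour_angle(t_sol))
    theta_z = math.degrees(math.acos(mu))
    print(sun_earth_factor(d), theta_z, air_mass(theta_z), rayleigh_tau(0.55, 1021.2))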

filetype

% 读取SVC HR-1024i光谱数据并绘制光谱曲线
clear; clc; close all;

% 文件路径 (根据实际位置修改)
filePath = 'F:\dingliang bigtest\gr102424_001无遮挡标准版.sig';

% 读取SVC .sig文件
try
    % 打开文件
    fid = fopen(filePath, 'r');
    if fid == -1
        error('无法打开文件,请检查文件路径是否正确');
    end

    % 读取文件头信息
    headerLines = {};
    while ~feof(fid)
        line = fgetl(fid);
        % 检查数据开始标记 (SVC文件数据通常以"data="开头)
        if contains(line, 'data=')
            break;
        end
        headerLines{end+1} = line;
    end

    % 提取关键元数据
    dateStr = '未知日期';
    timeStr = '未知时间';
    for i = 1:length(headerLines)
        if contains(headerLines{i}, 'date')
            dateStr = extractAfter(headerLines{i}, 'date=');
        end
        if contains(headerLines{i}, 'time')
            timeStr = extractAfter(headerLines{i}, 'time=');
        end
    end

    % 读取光谱数据 (格式: 波长(nm) 辐射亮度)
    data = textscan(fid, '%f %f', 'Delimiter', '\t', 'CommentStyle', '/');
    fclose(fid);

    % 提取波长和辐射亮度
    wavelength = data{1};
    radiance = data{2}; % 单位: 10^{-6} W/(cm^2·nm·sr)

    % 检查数据有效性
    if isempty(wavelength) || isempty(radiance)
        error('未读取到有效数据,请检查文件格式');
    end

    % 绘制光谱曲线
    figure('Position', [100, 100, 1000, 600], 'Name', 'SVC HR-1024i光谱数据');

    % 主光谱图
    subplot(2, 1, 1);
    plot(wavelength, radiance, 'b-', 'LineWidth', 1.5);
    title(sprintf('标准板反射太阳辐射亮度 - %s %s', dateStr, timeStr));
    xlabel('波长 (nm)');
    ylabel('辐射亮度 (10^{-6} W/(cm^2·nm·sr))');
    grid on;
    xlim([min(wavelength), max(wavelength)]);

    % 设置X轴刻度 (每200nm一个标记)
    xticks(min(wavelength):200:max(wavelength));

    % 添加关键信息标注
    text(0.02, 0.95, sprintf('数据点数: %d | 波长范围: %.0f-%.0f nm', ...
        length(wavelength), min(wavelength), max(wavelength)), ...
        'Units', 'normalized', 'BackgroundColor', 'w');

    % 可见光波段细节图
    subplot(2, 1, 2);
    visRange = (wavelength >= 400) & (wavelength <= 700);
    plot(wavelength(visRange), radiance(visRange), 'r-', 'LineWidth', 1.5);
    title('可见光波段细节 (400-700 nm)');
    xlabel('波长 (nm)');
    ylabel('辐射亮度');
    grid on;
    xlim([400, 700]);

    % 添加颜色标记
    hold on;
    colors = {'violet', 'blue', 'cyan', 'green', 'yellow', 'orange', 'red'};
    colorPos = linspace(400, 700, 8);
    for i = 1:7
        patch([colorPos(i), colorPos(i+1), colorPos(i+1), colorPos(i)], ...
            [min(ylim), min(ylim), max(ylim), max(ylim)], ...
            colors{i}, 'FaceAlpha', 0.1, 'EdgeColor', 'none');
    end
    hold off;

    % 保存图像
    saveas(gcf, 'standard_panel_spectrum.png');
    disp('光谱图已保存为 standard_panel_spectrum.png');

catch ME
    % 错误处理
    disp('发生错误:');
    disp(ME.message);
    if exist('fid', 'var') && fid ~= -1
        fclose(fid);
    end
end

Convert the above code into function form.
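The request at the end of the listing is to turn the script into a reusable function. The pattern is the same in any language: take the file path (or the already-parsed arrays) as parameters, return the data, and keep plotting separate from file reading. A minimal Python/matplotlib sketch of that refactor is shown below; the titles and output file name are arbitrary, and it assumes wavelength/radiance arrays such as those returned by the reader sketched earlier in this listing.

# Sketch of the plotting step refactored into a function that accepts data (illustration only).
import matplotlib.pyplot as plt

def plot_sig_spectrum(wavelength, radiance, time_label="", png_out="standard_panel_spectrum.png"):
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6))
    # Full spectrum
    ax1.plot(wavelength, radiance, "b-", lw=1.5)
    ax1.set(title=f"Standard panel radiance {time_label}".strip(),
            xlabel="Wavelength (nm)", ylabel="Radiance")
    ax1.grid(True)
    # Visible-band detail (400-700 nm)
    vis = [(w, r) for w, r in zip(wavelength, radiance) if 400 <= w <= 700]
    ax2.plot([w for w, _ in vis], [r for _, r in vis], "r-", lw=1.5)
    ax2.set(title="Visible band detail (400-700 nm)", xlabel="Wavelength (nm)", ylabel="Radiance")
    ax2.grid(True)
    fig.tight_layout()
    fig.savefig(png_out, dpi=300)
    plt.close(fig)
    return png_out

# Usage with the reader sketched above (hypothetical path):
# wl, rad, hdr = read_svc_sig(r"F:\dingliang bigtest\gr102424_001无遮挡标准版.sig")
# plot_sig_spectrum(wl, rad, hdr.get("time", ""))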

filetype

>------ 已启动生成: 项目: opencv_imgproc_SSE4_1, 配置: Release x64 ------ 2>------ 已启动生成: 项目: opencv_imgproc_AVX512_SKX, 配置: Release x64 ------ 3>------ 已启动生成: 项目: opencv_imgproc_AVX2, 配置: Release x64 ------ 4>------ 已启动生成: 项目: opencv_imgproc_AVX, 配置: Release x64 ------ 5>------ 已启动生成: 项目: opencv_features2d_SSE4_1, 配置: Release x64 ------ 6>------ 已启动生成: 项目: opencv_features2d_AVX512_SKX, 配置: Release x64 ------ 7>------ 已启动生成: 项目: opencv_features2d_AVX2, 配置: Release x64 ------ 8>------ 已启动生成: 项目: opencv_dnn_AVX512_SKX, 配置: Release x64 ------ 9>------ 已启动生成: 项目: opencv_dnn_AVX2, 配置: Release x64 ------ 10>------ 已启动生成: 项目: opencv_dnn_AVX, 配置: Release x64 ------ 11>------ 已启动生成: 项目: opencv_cudev, 配置: Release x64 ------ 12>------ 已启动生成: 项目: opencv_core_SSE4_2, 配置: Release x64 ------ 13>------ 已启动生成: 项目: opencv_core_SSE4_1, 配置: Release x64 ------ 14>------ 已启动生成: 项目: opencv_core_AVX512_SKX, 配置: Release x64 ------ 15>------ 已启动生成: 项目: opencv_core_AVX2, 配置: Release x64 ------ 16>------ 已启动生成: 项目: opencv_core_AVX, 配置: Release x64 ------ 1>accum.sse4_1.cpp 1>box_filter.sse4_1.cpp 1>color_hsv.sse4_1.cpp 1>color_rgb.sse4_1.cpp 1>color_yuv.sse4_1.cpp 1>filter.sse4_1.cpp 1>median_blur.sse4_1.cpp 1>morph.sse4_1.cpp 1>smooth.sse4_1.cpp 1>imgwarp.sse4_1.cpp 1>resize.sse4_1.cpp 2>sumpixels.avx512_skx.cpp 5>sift.sse4_1.cpp 6>sift.avx512_skx.cpp 3>accum.avx2.cpp 3>bilateral_filter.avx2.cpp 3>box_filter.avx2.cpp 3>color_hsv.avx2.cpp 3>color_rgb.avx2.cpp 3>color_yuv.avx2.cpp 3>filter.avx2.cpp 3>median_blur.avx2.cpp 3>morph.avx2.cpp 3>smooth.avx2.cpp 3>sumpixels.avx2.cpp 3>imgwarp.avx2.cpp 3>resize.avx2.cpp 8>layers_common.avx512_skx.cpp 9>layers_common.avx2.cpp 4>accum.avx.cpp 4>corner.avx.cpp 10>conv_block.avx.cpp 10>conv_depthwise.avx.cpp 10>conv_winograd_f63.avx.cpp 10>fast_gemm_kernels.avx.cpp 10>layers_common.avx.cpp 7>sift.avx2.cpp 7>fast.avx2.cpp 14>matmul.avx512_skx.cpp 13>arithm.sse4_1.cpp 13>matmul.sse4_1.cpp 15>arithm.avx2.cpp 15>convert.avx2.cpp 15>convert_scale.avx2.cpp 12>stat.sse4_2.cpp 15>count_non_zero.avx2.cpp 15>has_non_zero.avx2.cpp 15>mathfuncs_core.avx2.cpp 15>matmul.avx2.cpp 15>mean.avx2.cpp 15>merge.avx2.cpp 15>split.avx2.cpp 15>stat.avx2.cpp 15>sum.avx2.cpp 11>stub.cpp 6>opencv_features2d_AVX512_SKX.vcxproj -> E:\opencv-build\build\modules\features2d\opencv_features2d_AVX512_SKX.dir\Release\opencv_features2d_AVX512_SKX.lib 17>------ 已启动生成: 项目: opencv_calib3d_AVX2, 配置: Release x64 ------ 16>mathfuncs_core.avx.cpp 5>opencv_features2d_SSE4_1.vcxproj -> E:\opencv-build\build\modules\features2d\opencv_features2d_SSE4_1.dir\Release\opencv_features2d_SSE4_1.lib 9>layers_common.avx2.cpp 17>undistort.avx2.cpp 4>opencv_imgproc_AVX.vcxproj -> E:\opencv-build\build\modules\imgproc\opencv_imgproc_AVX.dir\Release\opencv_imgproc_AVX.lib 18>------ 已启动生成: 项目: gen_opencv_python_source, 配置: Release x64 ------ 2>opencv_imgproc_AVX512_SKX.vcxproj -> E:\opencv-build\build\modules\imgproc\opencv_imgproc_AVX512_SKX.dir\Release\opencv_imgproc_AVX512_SKX.lib 11> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudev4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudev4110.exp 8>layers_common.avx512_skx.cpp 10>opencv_dnn_AVX.vcxproj -> E:\opencv-build\build\modules\dnn\opencv_dnn_AVX.dir\Release\opencv_dnn_AVX.lib 7>opencv_features2d_AVX2.vcxproj -> E:\opencv-build\build\modules\features2d\opencv_features2d_AVX2.dir\Release\opencv_features2d_AVX2.lib 11>opencv_cudev.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudev4110.dll 3>opencv_imgproc_AVX2.vcxproj -> 
E:\opencv-build\build\modules\imgproc\opencv_imgproc_AVX2.dir\Release\opencv_imgproc_AVX2.lib 12>opencv_core_SSE4_2.vcxproj -> E:\opencv-build\build\modules\core\opencv_core_SSE4_2.dir\Release\opencv_core_SSE4_2.lib 1>opencv_imgproc_SSE4_1.vcxproj -> E:\opencv-build\build\modules\imgproc\opencv_imgproc_SSE4_1.dir\Release\opencv_imgproc_SSE4_1.lib 13>opencv_core_SSE4_1.vcxproj -> E:\opencv-build\build\modules\core\opencv_core_SSE4_1.dir\Release\opencv_core_SSE4_1.lib 15>opencv_core_AVX2.vcxproj -> E:\opencv-build\build\modules\core\opencv_core_AVX2.dir\Release\opencv_core_AVX2.lib 16>opencv_core_AVX.vcxproj -> E:\opencv-build\build\modules\core\opencv_core_AVX.dir\Release\opencv_core_AVX.lib 9>conv_block.avx2.cpp 9>conv_depthwise.avx2.cpp 9>conv_winograd_f63.avx2.cpp 9>fast_gemm_kernels.avx2.cpp 17>opencv_calib3d_AVX2.vcxproj -> E:\opencv-build\build\modules\calib3d\opencv_calib3d_AVX2.dir\Release\opencv_calib3d_AVX2.lib 14>opencv_core_AVX512_SKX.vcxproj -> E:\opencv-build\build\modules\core\opencv_core_AVX512_SKX.dir\Release\opencv_core_AVX512_SKX.lib 19>------ 已启动生成: 项目: opencv_core, 配置: Release x64 ------ 8>opencv_dnn_AVX512_SKX.vcxproj -> E:\opencv-build\build\modules\dnn\opencv_dnn_AVX512_SKX.dir\Release\opencv_dnn_AVX512_SKX.lib 19>cmake_pch.cxx 9>opencv_dnn_AVX2.vcxproj -> E:\opencv-build\build\modules\dnn\opencv_dnn_AVX2.dir\Release\opencv_dnn_AVX2.lib 19>opencl_kernels_core.cpp 19>algorithm.cpp 19>arithm.cpp 19>arithm.dispatch.cpp 19>array.cpp 19>async.cpp 19>batch_distance.cpp 19>bindings_utils.cpp 19>buffer_area.cpp 19>channels.cpp 19>check.cpp 19>command_line_parser.cpp 19>conjugate_gradient.cpp 19>convert.dispatch.cpp 19>convert_c.cpp 19>convert_scale.dispatch.cpp 19>copy.cpp 19>count_non_zero.dispatch.cpp 19>cuda_gpu_mat.cpp 19>cuda_gpu_mat_nd.cpp 19>cuda_host_mem.cpp 19>cuda_info.cpp 19>cuda_stream.cpp 19>datastructs.cpp 19>directx.cpp 19>downhill_simplex.cpp 19>dxt.cpp 19>gl_core_3_1.cpp 19>glob.cpp 19>hal_internal.cpp 19>has_non_zero.dispatch.cpp 19>kmeans.cpp 19>lapack.cpp 19>lda.cpp 19>logger.cpp 19>lpsolver.cpp 19>D:\Visual Studio\VC\Tools\MSVC\14.43.34808\include\xutility(506,82): warning C4267: “参数”: 从“size_t”转换到“const unsigned int”,可能丢失数据 19>(编译源文件“../../../opencv/modules/core/src/cuda_stream.cpp”) 19> D:\Visual Studio\VC\Tools\MSVC\14.43.34808\include\xutility(506,82): 19> 模板实例化上下文(最早的实例化上下文)为 19> E:\opencv-build\opencv\modules\core\src\cuda_stream.cpp(468,13): 19> 查看对正在编译的函数 模板 实例化“cv::Ptr<cv::cuda::Stream::Impl> cv::makePtr<cv::cuda::Stream::Impl,size_t>(const size_t &)”的引用 19> E:\opencv-build\opencv\modules\core\include\opencv2\core\cvstd_wrapper.hpp(146,27): 19> 查看对正在编译的函数 模板 实例化“std::shared_ptr<T> std::make_shared<_Tp,const size_t&>(const size_t &)”的引用 19> with 19> [ 19> T=cv::cuda::Stream::Impl, 19> _Tp=cv::cuda::Stream::Impl 19> ] 19> D:\Visual Studio\VC\Tools\MSVC\14.43.34808\include\memory(2903,46): 19> 查看对正在编译的函数 模板 实例化“std::_Ref_count_obj2<_Ty>::_Ref_count_obj2<const size_t&>(const size_t &)”的引用 19> with 19> [ 19> _Ty=cv::cuda::Stream::Impl 19> ] 19> D:\Visual Studio\VC\Tools\MSVC\14.43.34808\include\memory(2092,18): 19> 查看对正在编译的函数 模板 实例化“void std::_Construct_in_place<_Ty,const size_t&>(_Ty &,const size_t &) noexcept(false)”的引用 19> with 19> [ 19> _Ty=cv::cuda::Stream::Impl 19> ] 19>lut.cpp 19>mathfuncs.cpp 19>mathfuncs_core.dispatch.cpp 19>matmul.dispatch.cpp 19>matrix.cpp 19>matrix_c.cpp 19>matrix_decomp.cpp 19>matrix_expressions.cpp 19>matrix_iterator.cpp 19>matrix_operations.cpp 19>matrix_sparse.cpp 19>matrix_transform.cpp 19>matrix_wrap.cpp 
19>mean.dispatch.cpp 19>merge.dispatch.cpp 19>minmax.cpp 19>norm.cpp 19>ocl.cpp 19>opencl_clblas.cpp 19>opencl_clfft.cpp 19>opencl_core.cpp 19>opengl.cpp 19>out.cpp 19>ovx.cpp 19>parallel_openmp.cpp 19>parallel_tbb.cpp 19>parallel_impl.cpp 19>pca.cpp 19>persistence.cpp 19>persistence_base64_encoding.cpp 19>persistence_json.cpp 19>persistence_types.cpp 19>persistence_xml.cpp 19>persistence_yml.cpp 19>rand.cpp 19>softfloat.cpp 19>split.dispatch.cpp 19>stat.dispatch.cpp 19>stat_c.cpp 19>stl.cpp 19>sum.dispatch.cpp 19>system.cpp 19>tables.cpp 19>trace.cpp 19>types.cpp 19>umatrix.cpp 19>datafile.cpp 19>filesystem.cpp 19>logtagconfigparser.cpp 19>logtagmanager.cpp 19>samples.cpp 19>va_intel.cpp 19>alloc.cpp 19>parallel.cpp 19>parallel.cpp 19> 正在创建库 E:/opencv-build/build/lib/Release/opencv_core4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_core4110.exp 19>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 19>opencv_core.vcxproj -> E:\opencv-build\build\bin\Release\opencv_core4110.dll 19>已完成生成项目“opencv_core.vcxproj”的操作。 20>------ 已启动生成: 项目: opencv_version_win32, 配置: Release x64 ------ 21>------ 已启动生成: 项目: opencv_version, 配置: Release x64 ------ 22>------ 已启动生成: 项目: opencv_signal, 配置: Release x64 ------ 23>------ 已启动生成: 项目: opencv_ml, 配置: Release x64 ------ 24>------ 已启动生成: 项目: opencv_imgproc, 配置: Release x64 ------ 25>------ 已启动生成: 项目: opencv_flann, 配置: Release x64 ------ 26>------ 已启动生成: 项目: opencv_cudaarithm, 配置: Release x64 ------ 20>opencv_version.cpp 22>cmake_pch.cxx 23>cmake_pch.cxx 25>cmake_pch.cxx 21>opencv_version.cpp 26>cmake_pch.cxx 24>cmake_pch.cxx 22>opencv_signal_main.cpp 22>signal_resample.cpp 23>opencv_ml_main.cpp 23>ann_mlp.cpp 23>boost.cpp 23>data.cpp 23>em.cpp 23>gbt.cpp 23>inner_functions.cpp 23>kdtree.cpp 23>knearest.cpp 23>lr.cpp 23>nbayes.cpp 23>rtrees.cpp 23>svm.cpp 23>svmsgd.cpp 23>testset.cpp 23>tree.cpp 21>opencv_version.vcxproj -> E:\opencv-build\build\bin\Release\opencv_version.exe 24>opencl_kernels_imgproc.cpp 24>opencv_imgproc_main.cpp 24>accum.cpp 24>accum.dispatch.cpp 24>approx.cpp 24>bilateral_filter.dispatch.cpp 24>blend.cpp 24>box_filter.dispatch.cpp 24>canny.cpp 20>opencv_version_win32.vcxproj -> E:\opencv-build\build\bin\Release\opencv_version_win32.exe 24>clahe.cpp 24>color.cpp 24>color_hsv.dispatch.cpp 24>color_lab.cpp 24>color_rgb.dispatch.cpp 24>color_yuv.dispatch.cpp 24>colormap.cpp 24>connectedcomponents.cpp 24>contours.cpp 24>contours_approx.cpp 24>contours_common.cpp 24>contours_link.cpp 25>opencv_flann_main.cpp 24>contours_new.cpp 24>convhull.cpp 25>flann.cpp 24>corner.cpp 25>miniflann.cpp 24>cornersubpix.cpp 22> 正在创建库 E:/opencv-build/build/lib/Release/opencv_signal4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_signal4110.exp 26>opencv_cudaarithm_main.cpp 24>demosaicing.cpp 26>arithm.cpp 24>deriv.cpp 26>core.cpp 24>distransform.cpp 24>drawing.cpp 24>emd.cpp 24>emd_new.cpp 24>featureselect.cpp 26>element_operations.cpp 24>filter.dispatch.cpp 26>lut.cpp 26>reductions.cpp 24>floodfill.cpp 24>gabor.cpp 24>generalized_hough.cpp 24>geometry.cpp 24>grabcut.cpp 24>hershey_fonts.cpp 24>histogram.cpp 24>hough.cpp 24>imgwarp.cpp 24>intelligent_scissors.cpp 24>intersection.cpp 24>linefit.cpp 24>lsd.cpp 24>main.cpp 23> 正在创建库 E:/opencv-build/build/lib/Release/opencv_ml4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_ml4110.exp 24>matchcontours.cpp 24>median_blur.dispatch.cpp 24>min_enclosing_triangle.cpp 24>moments.cpp 24>morph.dispatch.cpp 24>phasecorr.cpp 24>pyramids.cpp 24>resize.cpp 24>rotcalipers.cpp 24>samplers.cpp 
24>segmentation.cpp 24>shapedescr.cpp 24>smooth.dispatch.cpp 24>spatialgradient.cpp 24>stackblur.cpp 22>opencv_signal.vcxproj -> E:\opencv-build\build\bin\Release\opencv_signal4110.dll 24>subdivision2d.cpp 24>sumpixels.dispatch.cpp 24>tables.cpp 24>templmatch.cpp 24>thresh.cpp 24>utils.cpp 23>opencv_ml.vcxproj -> E:\opencv-build\build\bin\Release\opencv_ml4110.dll 26> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudaarithm4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudaarithm4110.exp 26>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 26>opencv_cudaarithm.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudaarithm4110.dll 26>已完成生成项目“opencv_cudaarithm.vcxproj”的操作。 25> 正在创建库 E:/opencv-build/build/lib/Release/opencv_flann4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_flann4110.exp 25>opencv_flann.vcxproj -> E:\opencv-build\build\bin\Release\opencv_flann4110.dll 27>------ 已启动生成: 项目: opencv_surface_matching, 配置: Release x64 ------ 27>cmake_pch.cxx 24> 正在创建库 E:/opencv-build/build/lib/Release/opencv_imgproc4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_imgproc4110.exp 24>opencv_imgproc.vcxproj -> E:\opencv-build\build\bin\Release\opencv_imgproc4110.dll 28>------ 已启动生成: 项目: opencv_reg, 配置: Release x64 ------ 29>------ 已启动生成: 项目: opencv_quality, 配置: Release x64 ------ 30>------ 已启动生成: 项目: opencv_plot, 配置: Release x64 ------ 31>------ 已启动生成: 项目: opencv_phase_unwrapping, 配置: Release x64 ------ 32>------ 已启动生成: 项目: opencv_intensity_transform, 配置: Release x64 ------ 33>------ 已启动生成: 项目: opencv_imgcodecs, 配置: Release x64 ------ 34>------ 已启动生成: 项目: opencv_img_hash, 配置: Release x64 ------ 35>------ 已启动生成: 项目: opencv_hfs, 配置: Release x64 ------ 36>------ 已启动生成: 项目: opencv_fuzzy, 配置: Release x64 ------ 37>------ 已启动生成: 项目: opencv_features2d, 配置: Release x64 ------ 38>------ 已启动生成: 项目: opencv_dnn, 配置: Release x64 ------ 39>------ 已启动生成: 项目: opencv_cudawarping, 配置: Release x64 ------ 40>------ 已启动生成: 项目: opencv_cudafilters, 配置: Release x64 ------ 31>cmake_pch.cxx 30>cmake_pch.cxx 29>cmake_pch.cxx 32>cmake_pch.cxx 28>map.cpp 28>mapaffine.cpp 28>mapper.cpp 28>mappergradaffine.cpp 28>mappergradeuclid.cpp 28>mappergradproj.cpp 28>mappergradshift.cpp 28>mappergradsimilar.cpp 28>mapperpyramid.cpp 28>mapprojec.cpp 28>mapshift.cpp 34>cmake_pch.cxx 36>cmake_pch.cxx 27>opencv_surface_matching_main.cpp 27>icp.cpp 40>cmake_pch.cxx 27>pose_3d.cpp 27>ppf_helpers.cpp 27>ppf_match_3d.cpp 35>cmake_pch.cxx 27>t_hash_int.cpp 38>cmake_pch.cxx 39>cmake_pch.cxx 29>opencv_quality_main.cpp 29>qualitybrisque.cpp 29>qualitygmsd.cpp 34>opencv_img_hash_main.cpp 32>opencv_intensity_transform_main.cpp 31>opencv_phase_unwrapping_main.cpp 30>opencv_plot_main.cpp 29>qualitymse.cpp 29>qualityssim.cpp 34>average_hash.cpp 34>block_mean_hash.cpp 34>color_moment_hash.cpp 31>histogramphaseunwrapping.cpp 32>bimef.cpp 34>img_hash_base.cpp 32>intensity_transform.cpp 30>plot.cpp 34>marr_hildreth_hash.cpp 34>phash.cpp 35>opencv_hfs_main.cpp 34>radial_variance_hash.cpp 35>hfs.cpp 35>hfs_core.cpp 35>magnitude.cpp 36>opencv_fuzzy_main.cpp 36>fuzzy_F0_math.cpp 36>fuzzy_F1_math.cpp 36>fuzzy_image.cpp 35>merge.cpp 35>gslic_engine.cpp 35>slic.cpp 33>cmake_pch.cxx 40>opencv_cudafilters_main.cpp 40>filtering.cpp 27> 正在创建库 E:/opencv-build/build/lib/Release/opencv_surface_matching4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_surface_matching4110.exp 39>opencv_cudawarping_main.cpp 38>opencl_kernels_dnn.cpp 28> 正在创建库 E:/opencv-build/build/lib/Release/opencv_reg4110.lib 和对象 
E:/opencv-build/build/lib/Release/opencv_reg4110.exp 39>pyramids.cpp 39>remap.cpp 39>resize.cpp 39>warp.cpp 38>opencv_dnn_main.cpp 38>opencv-caffe.pb.cc 38>opencv-onnx.pb.cc 38>attr_value.pb.cc 38>function.pb.cc 38>graph.pb.cc 38>op_def.pb.cc 38>tensor.pb.cc 38>tensor_shape.pb.cc 38>types.pb.cc 38>versions.pb.cc 38>caffe_importer.cpp 38>caffe_io.cpp 38>caffe_shrinker.cpp 38>darknet_importer.cpp 38>darknet_io.cpp 31> 正在创建库 E:/opencv-build/build/lib/Release/opencv_phase_unwrapping4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_phase_unwrapping4110.exp 32> 正在创建库 E:/opencv-build/build/lib/Release/opencv_intensity_transform4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_intensity_transform4110.exp 29> 正在创建库 E:/opencv-build/build/lib/Release/opencv_quality4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_quality4110.exp 27>opencv_surface_matching.vcxproj -> E:\opencv-build\build\bin\Release\opencv_surface_matching4110.dll 38>debug_utils.cpp 28>opencv_reg.vcxproj -> E:\opencv-build\build\bin\Release\opencv_reg4110.dll 30> 正在创建库 E:/opencv-build/build/lib/Release/opencv_plot4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_plot4110.exp 38>dnn.cpp 38>dnn_params.cpp 38>dnn_read.cpp 38>dnn_utils.cpp 32>opencv_intensity_transform.vcxproj -> E:\opencv-build\build\bin\Release\opencv_intensity_transform4110.dll 38>graph_simplifier.cpp 31>opencv_phase_unwrapping.vcxproj -> E:\opencv-build\build\bin\Release\opencv_phase_unwrapping4110.dll 38>halide_scheduler.cpp 38>ie_ngraph.cpp 38>init.cpp 35> 正在创建库 E:/opencv-build/build/lib/Release/opencv_hfs4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_hfs4110.exp 35>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 30>opencv_plot.vcxproj -> E:\opencv-build\build\bin\Release\opencv_plot4110.dll 34> 正在创建库 E:/opencv-build/build/lib/Release/opencv_img_hash4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_img_hash4110.exp 38>layers_rvp052.cpp 38>quantization_utils.cpp 38>layer.cpp 38>layer_factory.cpp 29>opencv_quality.vcxproj -> E:\opencv-build\build\bin\Release\opencv_quality4110.dll 38>accum_layer.cpp 38>arg_layer.cpp 38>attention_layer.cpp 36> 正在创建库 E:/opencv-build/build/lib/Release/opencv_fuzzy4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_fuzzy4110.exp 38>blank_layer.cpp 38>concat_layer.cpp 38>const_layer.cpp 38>correlation_layer.cpp 38>conv_depthwise.cpp 38>conv_winograd_f63.cpp 38>conv_winograd_f63.dispatch.cpp 38>convolution.cpp 38>fast_gemm.cpp 38>fast_norm.cpp 38>softmax.cpp 38>crop_and_resize_layer.cpp 38>cumsum_layer.cpp 38>depth_space_ops_layer.cpp 38>detection_output_layer.cpp 34>opencv_img_hash.vcxproj -> E:\opencv-build\build\bin\Release\opencv_img_hash4110.dll 38>einsum_layer.cpp 38>expand_layer.cpp 33>opencv_imgcodecs_main.cpp 35>opencv_hfs.vcxproj -> E:\opencv-build\build\bin\Release\opencv_hfs4110.dll 39> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudawarping4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudawarping4110.exp 33>bitstrm.cpp 33>exif.cpp 33>grfmt_avif.cpp 39>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 38>flatten_layer.cpp 33>grfmt_base.cpp 38>flow_warp_layer.cpp 38>gather_elements_layer.cpp 33>grfmt_bmp.cpp 33>grfmt_exr.cpp 33>grfmt_gdal.cpp 33>grfmt_gdcm.cpp 33>grfmt_gif.cpp 33>grfmt_hdr.cpp 33>grfmt_jpeg.cpp 38>gather_layer.cpp 38>gemm_layer.cpp 33>grfmt_jpeg2000.cpp 33>grfmt_jpeg2000_openjpeg.cpp 40> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudafilters4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudafilters4110.exp 
33>grfmt_jpegxl.cpp 40>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 38>group_norm_layer.cpp 33>grfmt_pam.cpp 38>instance_norm_layer.cpp 33>grfmt_pfm.cpp 33>grfmt_png.cpp 33>grfmt_pxm.cpp 33>grfmt_spng.cpp 36>opencv_fuzzy.vcxproj -> E:\opencv-build\build\bin\Release\opencv_fuzzy4110.dll 33>grfmt_sunras.cpp 33>grfmt_tiff.cpp 33>grfmt_webp.cpp 38>layer_norm.cpp 38>layers_common.cpp 33>loadsave.cpp 33>rgbe.cpp 33>utils.cpp 38>lrn_layer.cpp 38>matmul_layer.cpp 38>max_unpooling_layer.cpp 38>mvn_layer.cpp 38>nary_eltwise_layers.cpp 38>normalize_bbox_layer.cpp 38>not_implemented_layer.cpp 38>padding_layer.cpp 38>permute_layer.cpp 38>prior_box_layer.cpp 39>opencv_cudawarping.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudawarping4110.dll 39>已完成生成项目“opencv_cudawarping.vcxproj”的操作。 35>已完成生成项目“opencv_hfs.vcxproj”的操作。 38>proposal_layer.cpp 38>recurrent_layers.cpp 38>reduce_layer.cpp 38>region_layer.cpp 38>reorg_layer.cpp 38>reshape_layer.cpp 38>resize_layer.cpp 38>scatterND_layer.cpp 38>scatter_layer.cpp 40>opencv_cudafilters.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudafilters4110.dll 38>shuffle_channel_layer.cpp 38>slice_layer.cpp 33>LINK : fatal error LNK1181: 无法打开输入文件“E:\Anaconda\Library\bin\avif.dll” 38>split_layer.cpp 40>已完成生成项目“opencv_cudafilters.vcxproj”的操作。 38>tile_layer.cpp 33>已完成生成项目“opencv_imgcodecs.vcxproj”的操作 - 失败。 41>------ 已启动生成: 项目: opencv_videoio, 配置: Release x64 ------ 42>------ 已启动生成: 项目: opencv_cudaimgproc, 配置: Release x64 ------ 38>topk_layer.cpp 38>legacy_backend.cpp 38>model.cpp 38>net.cpp 38>net_cann.cpp 37>cmake_pch.cxx 38>net_impl_backend.cpp 38>net_impl.cpp 38>net_impl_fuse.cpp 38>net_openvino.cpp 38>net_quantization.cpp 38>nms.cpp 38>common.cpp 38>math_functions.cpp 38>ocl4dnn_conv_spatial.cpp 38>ocl4dnn_inner_product.cpp 38>ocl4dnn_lrn.cpp 38>ocl4dnn_pool.cpp 38>ocl4dnn_softmax.cpp 38>onnx_graph_simplifier.cpp 38>onnx_importer.cpp 41>cmake_pch.cxx 38>op_cann.cpp 38>op_cuda.cpp 38>op_halide.cpp 38>op_inf_engine.cpp 38>op_timvx.cpp 38>op_vkcom.cpp 38>op_webnn.cpp 38>registry.cpp 38>tf_graph_simplifier.cpp 38>tf_importer.cpp 42>cmake_pch.cxx 38>tf_io.cpp 38>tflite_importer.cpp 38>THDiskFile.cpp 38>THFile.cpp 38>THGeneral.cpp 38>torch_importer.cpp 38>conv_1x1_fast_spv.cpp 38>conv_depthwise_3x3_spv.cpp 38>conv_depthwise_spv.cpp 38>conv_implicit_gemm_spv.cpp 38>gemm_spv.cpp 38>nary_eltwise_binary_forward_spv.cpp 38>spv_shader.cpp 38>buffer.cpp 38>command.cpp 38>context.cpp 38>fence.cpp 38>internal.cpp 37>opencl_kernels_features2d.cpp 37>opencv_features2d_main.cpp 37>affine_feature.cpp 38>op_base.cpp 38>op_conv.cpp 37>agast.cpp 37>agast_score.cpp 37>akaze.cpp 37>bagofwords.cpp 37>blobdetector.cpp 37>brisk.cpp 37>draw.cpp 37>dynamic.cpp 38>op_matmul.cpp 38>op_naryEltwise.cpp 38>pipeline.cpp 38>tensor.cpp 37>evaluation.cpp 37>fast.cpp 37>fast_score.cpp 38>vk_functions.cpp 37>feature2d.cpp 37>gftt.cpp 38>vk_loader.cpp 37>kaze.cpp 37>AKAZEFeatures.cpp 37>KAZEFeatures.cpp 37>fed.cpp 37>nldiffusion_functions.cpp 37>keypoint.cpp 37>main.cpp 37>matchers.cpp 37>mser.cpp 37>orb.cpp 37>sift.dispatch.cpp 42>opencv_cudaimgproc_main.cpp 42>bilateral_filter.cpp 42>blend.cpp 42>canny.cpp 42>color.cpp 42>connectedcomponents.cpp 42>corners.cpp 42>generalized_hough.cpp 42>gftt.cpp 42>histogram.cpp 42>hough_circles.cpp 42>hough_lines.cpp 42>hough_segments.cpp 42>match_template.cpp 42>mean_shift.cpp 42>moments.cpp 42>mssegmentation.cpp 41>opencv_videoio_main.cpp 41>backend_static.cpp 41>cap.cpp 41>cap_dshow.cpp 41>cap_images.cpp 41>cap_mjpeg_decoder.cpp 
41>cap_mjpeg_encoder.cpp 41>cap_msmf.cpp 41>obsensor_stream_channel_msmf.cpp 41>obsensor_uvc_stream_channel.cpp 41>cap_obsensor_capture.cpp 41>container_avi.cpp 41>videoio_c.cpp 41>videoio_registry.cpp 38>backend.cpp 42> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudaimgproc4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudaimgproc4110.exp 42>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 42>opencv_cudaimgproc.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudaimgproc4110.dll 42>已完成生成项目“opencv_cudaimgproc.vcxproj”的操作。 43>------ 已启动生成: 项目: opencv_photo, 配置: Release x64 ------ 37> 正在创建库 E:/opencv-build/build/lib/Release/opencv_features2d4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_features2d4110.exp 43>cmake_pch.cxx 41>backend_plugin.cpp 37>opencv_features2d.vcxproj -> E:\opencv-build\build\bin\Release\opencv_features2d4110.dll 44>------ 已启动生成: 项目: opencv_saliency, 配置: Release x64 ------ 45>------ 已启动生成: 项目: opencv_line_descriptor, 配置: Release x64 ------ 46>------ 已启动生成: 项目: opencv_cudafeatures2d, 配置: Release x64 ------ 47>------ 已启动生成: 项目: opencv_calib3d, 配置: Release x64 ------ 38>batch_norm_layer.cpp 44>cmake_pch.cxx 45>cmake_pch.cxx 46>cmake_pch.cxx 47>cmake_pch.cxx 38>convolution_layer.cpp 41>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_imgcodecs4110.lib” 41>已完成生成项目“opencv_videoio.vcxproj”的操作 - 失败。 48>------ 已启动生成: 项目: opencv_highgui, 配置: Release x64 ------ 49>------ 已启动生成: 项目: opencv_cudacodec, 配置: Release x64 ------ 43>opencl_kernels_photo.cpp 43>opencv_photo_main.cpp 43>align.cpp 43>calibrate.cpp 43>contrast_preserve.cpp 43>denoise_tvl1.cpp 43>denoising.cpp 43>denoising.cuda.cpp 43>hdr_common.cpp 43>inpaint.cpp 43>merge.cpp 43>npr.cpp 43>seamless_cloning.cpp 43>seamless_cloning_impl.cpp 43>tonemap.cpp 48>cmake_pch.cxx 49>cmake_pch.cxx 44>opencv_saliency_main.cpp 44>CmFile.cpp 44>CmShow.cpp 44>FilterTIG.cpp 44>ValStructVec.cpp 44>objectnessBING.cpp 44>motionSaliency.cpp 44>motionSaliencyBinWangApr2014.cpp 44>objectness.cpp 44>saliency.cpp 44>staticSaliency.cpp 44>staticSaliencyFineGrained.cpp 44>staticSaliencySpectralResidual.cpp 47>opencl_kernels_calib3d.cpp 47>opencv_calib3d_main.cpp 47>ap3p.cpp 47>calibinit.cpp 47>calibration.cpp 47>calibration_base.cpp 45>opencv_line_descriptor_main.cpp 47>calibration_handeye.cpp 45>LSDDetector.cpp 45>binary_descriptor.cpp 47>checkchessboard.cpp 47>chessboard.cpp 47>circlesgrid.cpp 45>binary_descriptor_matcher.cpp 47>compat_ptsetreg.cpp 47>dls.cpp 47>epnp.cpp 47>fisheye.cpp 47>five-point.cpp 45>draw.cpp 47>fundam.cpp 47>homography_decomp.cpp 47>ippe.cpp 47>levmarq.cpp 46>opencv_cudafeatures2d_main.cpp 46>brute_force_matcher.cpp 46>fast.cpp 47>main.cpp 46>feature2d_async.cpp 47>p3p.cpp 46>orb.cpp 47>polynom_solver.cpp 38>elementwise_layers.cpp 47>ptsetreg.cpp 47>quadsubpix.cpp 47>rho.cpp 47>solvepnp.cpp 47>sqpnp.cpp 47>stereo_geom.cpp 47>stereobm.cpp 47>stereosgbm.cpp 47>triangulate.cpp 47>undistort.dispatch.cpp 47>upnp.cpp 47>bundle.cpp 47>degeneracy.cpp 47>dls_solver.cpp 47>essential_solver.cpp 47>estimator.cpp 47>fundamental_solver.cpp 45> 正在创建库 E:/opencv-build/build/lib/Release/opencv_line_descriptor4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_line_descriptor4110.exp 44> 正在创建库 E:/opencv-build/build/lib/Release/opencv_saliency4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_saliency4110.exp 47>gamma_values.cpp 47>homography_solver.cpp 47>local_optimization.cpp 47>pnp_solver.cpp 46> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudafeatures2d4110.lib 和对象 
E:/opencv-build/build/lib/Release/opencv_cudafeatures2d4110.exp 47>quality.cpp 38>eltwise_layer.cpp 47>ransac_solvers.cpp 47>sampler.cpp 46>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 43> 正在创建库 E:/opencv-build/build/lib/Release/opencv_photo4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_photo4110.exp 47>termination.cpp 43>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 47>utils.cpp 49>E:\opencv-build\opencv_contrib\modules\cudacodec\src\video_decoder.hpp(107,118): error C2065: “cudaVideoSurfaceFormat_YUV444”: 未声明的标识符 49>(编译源文件“CMakeFiles/opencv_cudacodec.dir/cmake_pch.cxx”) 49>E:\opencv-build\opencv_contrib\modules\cudacodec\src\video_decoder.hpp(107,19): error C2737: “type”: 必须初始化 const 对象 49>(编译源文件“CMakeFiles/opencv_cudacodec.dir/cmake_pch.cxx”) 49>已完成生成项目“opencv_cudacodec.vcxproj”的操作 - 失败。 45>opencv_line_descriptor.vcxproj -> E:\opencv-build\build\bin\Release\opencv_line_descriptor4110.dll 44>opencv_saliency.vcxproj -> E:\opencv-build\build\bin\Release\opencv_saliency4110.dll 46>opencv_cudafeatures2d.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudafeatures2d4110.dll 43>opencv_photo.vcxproj -> E:\opencv-build\build\bin\Release\opencv_photo4110.dll 48>opencv_highgui_main.cpp 48>backend.cpp 48>roiSelector.cpp 48>window.cpp 48>window_w32.cpp 43>已完成生成项目“opencv_photo.vcxproj”的操作。 50>------ 已启动生成: 项目: opencv_xphoto, 配置: Release x64 ------ 46>已完成生成项目“opencv_cudafeatures2d.vcxproj”的操作。 50>bm3d_image_denoising.cpp 50>dct_image_denoising.cpp 50>grayworld_white_balance.cpp 50>inpainting.cpp 50>learning_based_color_balance.cpp 50>oilpainting.cpp 38>fully_connected_layer.cpp 50>simple_color_balance.cpp 50>tonemap.cpp 48>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_videoio4110.lib” 48>已完成生成项目“opencv_highgui.vcxproj”的操作 - 失败。 51>------ 已启动生成: 项目: opencv_visualisation, 配置: Release x64 ------ 52>------ 已启动生成: 项目: opencv_ts, 配置: Release x64 ------ 53>------ 已启动生成: 项目: opencv_bioinspired, 配置: Release x64 ------ 54>------ 已启动生成: 项目: opencv_annotation, 配置: Release x64 ------ 51>opencv_visualisation.cpp 54>opencv_annotation.cpp 52>cmake_pch.cxx 53>cmake_pch.cxx 38>pooling_layer.cpp 38>scale_layer.cpp 53>opencl_kernels_bioinspired.cpp 53>opencv_bioinspired_main.cpp 53>basicretinafilter.cpp 53>imagelogpolprojection.cpp 53>magnoretinafilter.cpp 53>parvoretinafilter.cpp 53>retina.cpp 53>retina_ocl.cpp 53>retinacolor.cpp 53>retinafasttonemapping.cpp 53>retinafilter.cpp 53>transientareassegmentationmodule.cpp 54>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_highgui4110.lib” 54>已完成生成项目“opencv_annotation.vcxproj”的操作 - 失败。 52>cuda_perf.cpp 52>cuda_test.cpp 52>ocl_perf.cpp 52>ocl_test.cpp 52>ts.cpp 52>ts_arrtest.cpp 52>ts_func.cpp 52>ts_gtest.cpp 52>ts_perf.cpp 52>ts_tags.cpp 38>softmax_layer.cpp 53>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_highgui4110.lib” 51>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_highgui4110.lib” 53>已完成生成项目“opencv_bioinspired.vcxproj”的操作 - 失败。 51>已完成生成项目“opencv_visualisation.vcxproj”的操作 - 失败。 38>batch_norm_layer.cpp 50> 正在创建库 E:/opencv-build/build/lib/Release/opencv_xphoto4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_xphoto4110.exp 50>opencv_xphoto.vcxproj -> E:\opencv-build\build\bin\Release\opencv_xphoto4110.dll 38>convolution_layer.cpp 52>opencv_ts.vcxproj -> E:\opencv-build\build\lib\Release\opencv_ts4110.lib 38>elementwise_layers.cpp 38>eltwise_layer.cpp 38>fully_connected_layer.cpp 38>pooling_layer.cpp 47> 正在创建库 
E:/opencv-build/build/lib/Release/opencv_calib3d4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_calib3d4110.exp 47>opencv_calib3d.vcxproj -> E:\opencv-build\build\bin\Release\opencv_calib3d4110.dll 55>------ 已启动生成: 项目: opencv_structured_light, 配置: Release x64 ------ 56>------ 已启动生成: 项目: opencv_shape, 配置: Release x64 ------ 57>------ 已启动生成: 项目: opencv_rgbd, 配置: Release x64 ------ 58>------ 已启动生成: 项目: opencv_rapid, 配置: Release x64 ------ 59>------ 已启动生成: 项目: opencv_cudastereo, 配置: Release x64 ------ 60>------ 已启动生成: 项目: opencv_ccalib, 配置: Release x64 ------ 55>cmake_pch.cxx 56>cmake_pch.cxx 57>cmake_pch.cxx 58>cmake_pch.cxx 60>cmake_pch.cxx 59>cmake_pch.cxx 38>scale_layer.cpp 58>opencv_rapid_main.cpp 55>opencv_structured_light_main.cpp 58>histogram.cpp 58>rapid.cpp 55>graycodepattern.cpp 55>sinusoidalpattern.cpp 56>opencv_shape_main.cpp 56>aff_trans.cpp 56>emdL1.cpp 56>haus_dis.cpp 56>hist_cost.cpp 56>sc_dis.cpp 60>opencv_ccalib_main.cpp 56>tps_trans.cpp 60>ccalib.cpp 60>multicalib.cpp 60>omnidir.cpp 60>randpattern.cpp 59>opencv_cudastereo_main.cpp 57>opencl_kernels_rgbd.cpp 59>disparity_bilateral_filter.cpp 57>opencv_rgbd_main.cpp 59>stereobm.cpp 57>colored_kinfu.cpp 57>colored_tsdf.cpp 57>depth_cleaner.cpp 57>depth_registration.cpp 57>depth_to_3d.cpp 57>dqb.cpp 57>dynafu.cpp 57>dynafu_tsdf.cpp 59>stereobp.cpp 59>stereocsbp.cpp 57>fast_icp.cpp 59>stereosgm.cpp 57>hash_tsdf.cpp 59>util.cpp 57>kinfu.cpp 57>kinfu_frame.cpp 57>large_kinfu.cpp 57>linemod.cpp 57>nonrigid_icp.cpp 57>normal.cpp 57>odometry.cpp 57>plane.cpp 57>pose_graph.cpp 57>tsdf.cpp 57>tsdf_functions.cpp 57>utils.cpp 57>volume.cpp 57>warpfield.cpp 38>softmax_layer.cpp 58> 正在创建库 E:/opencv-build/build/lib/Release/opencv_rapid4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_rapid4110.exp 55> 正在创建库 E:/opencv-build/build/lib/Release/opencv_structured_light4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_structured_light4110.exp 56> 正在创建库 E:/opencv-build/build/lib/Release/opencv_shape4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_shape4110.exp 59> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudastereo4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudastereo4110.exp 59>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 55>opencv_structured_light.vcxproj -> E:\opencv-build\build\bin\Release\opencv_structured_light4110.dll 58>opencv_rapid.vcxproj -> E:\opencv-build\build\bin\Release\opencv_rapid4110.dll 56>opencv_shape.vcxproj -> E:\opencv-build\build\bin\Release\opencv_shape4110.dll 61>------ 已启动生成: 项目: opencv_xfeatures2d, 配置: Release x64 ------ 59>opencv_cudastereo.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudastereo4110.dll 59>已完成生成项目“opencv_cudastereo.vcxproj”的操作。 60>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_highgui4110.lib” 60>已完成生成项目“opencv_ccalib.vcxproj”的操作 - 失败。 61>cmake_pch.cxx 61>opencl_kernels_xfeatures2d.cpp 61>opencv_xfeatures2d_main.cpp 61>affine_feature2d.cpp 61>beblid.cpp 61>brief.cpp 61>daisy.cpp 61>ellipticKeyPoint.cpp 61>fast.cpp 61>freak.cpp 61>gms.cpp 61>harris_lapace_detector.cpp 61>latch.cpp 61>Match.cpp 61>Point.cpp 61>PointPair.cpp 61>lucid.cpp 61>msd.cpp 61>pct_signatures.cpp 61>grayscale_bitmap.cpp 61>pct_clusterizer.cpp 61>pct_sampler.cpp 61>pct_signatures_sqfd.cpp 61>stardetector.cpp 61>surf.cpp 61>surf.cuda.cpp 61>surf.ocl.cpp 61>tbmr.cpp 61>xfeatures2d_init.cpp 57> 正在创建库 E:/opencv-build/build/lib/Release/opencv_rgbd4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_rgbd4110.exp 38> 正在创建库 
E:/opencv-build/build/lib/Release/opencv_dnn4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_dnn4110.exp 38>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 57>opencv_rgbd.vcxproj -> E:\opencv-build\build\bin\Release\opencv_rgbd4110.dll 38>opencv_dnn.vcxproj -> E:\opencv-build\build\bin\Release\opencv_dnn4110.dll 38>已完成生成项目“opencv_dnn.vcxproj”的操作。 62>------ 已启动生成: 项目: opencv_video, 配置: Release x64 ------ 63>------ 已启动生成: 项目: opencv_text, 配置: Release x64 ------ 64>------ 已启动生成: 项目: opencv_objdetect, 配置: Release x64 ------ 65>------ 已启动生成: 项目: opencv_model_diagnostics, 配置: Release x64 ------ 66>------ 已启动生成: 项目: opencv_mcc, 配置: Release x64 ------ 67>------ 已启动生成: 项目: opencv_dnn_superres, 配置: Release x64 ------ 68>------ 已启动生成: 项目: opencv_dnn_objdetect, 配置: Release x64 ------ 63>cmake_pch.cxx 62>cmake_pch.cxx 65>model_diagnostics.cpp 64>cmake_pch.cxx 66>cmake_pch.cxx 67>cmake_pch.cxx 68>cmake_pch.cxx 63>opencv_text_main.cpp 63>erfilter.cpp 63>ocr_beamsearch_decoder.cpp 63>ocr_hmm_decoder.cpp 63>ocr_holistic.cpp 63>ocr_tesseract.cpp 63>text_detectorCNN.cpp 63>text_detector_swt.cpp 62>opencl_kernels_video.cpp 64>opencl_kernels_objdetect.cpp 62>opencv_video_main.cpp 62>bgfg_KNN.cpp 62>bgfg_gaussmix2.cpp 64>opencv_objdetect_main.cpp 64>apriltag_quad_thresh.cpp 62>camshift.cpp 64>zmaxheap.cpp 64>aruco_board.cpp 64>aruco_detector.cpp 64>aruco_dictionary.cpp 62>dis_flow.cpp 64>aruco_utils.cpp 64>charuco_detector.cpp 62>ecc.cpp 62>kalman.cpp 68>opencv_dnn_objdetect_main.cpp 62>lkpyramid.cpp 62>optflowgf.cpp 62>optical_flow_io.cpp 64>barcode.cpp 64>abs_decoder.cpp 62>tracker_feature.cpp 64>hybrid_binarizer.cpp 64>super_scale.cpp 64>utils.cpp 64>ean13_decoder.cpp 62>tracker_feature_set.cpp 64>ean8_decoder.cpp 64>upcean_decoder.cpp 62>tracker_mil_model.cpp 68>core_detect.cpp 62>tracker_mil_state.cpp 62>tracker_model.cpp 64>bardetect.cpp 62>tracker_sampler.cpp 62>tracker_sampler_algorithm.cpp 62>tracker_state_estimator.cpp 62>tracking_feature.cpp 64>cascadedetect.cpp 62>tracking_online_mil.cpp 64>cascadedetect_convert.cpp 64>detection_based_tracker.cpp 62>tracker.cpp 64>face_detect.cpp 62>tracker_dasiamrpn.cpp 64>face_recognize.cpp 62>tracker_goturn.cpp 64>graphical_code_detector.cpp 64>hog.cpp 62>tracker_mil.cpp 67>opencv_dnn_superres_main.cpp 64>main.cpp 64>qrcode.cpp 64>qrcode_encoder.cpp 62>tracker_nano.cpp 62>tracker_vit.cpp 62>variational_refinement.cpp 67>dnn_superres.cpp 66>opencv_mcc_main.cpp 66>bound_min.cpp 66>ccm.cpp 66>charts.cpp 66>checker_detector.cpp 66>checker_model.cpp 66>color.cpp 66>colorspace.cpp 66>common.cpp 66>debug.cpp 66>distance.cpp 66>graph_cluster.cpp 66>io.cpp 66>linearize.cpp 66>mcc.cpp 66>operations.cpp 66>utils.cpp 66>wiener_filter.cpp 65>opencv_model_diagnostics.vcxproj -> E:\opencv-build\build\bin\Release\opencv_model_diagnostics.exe 68>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_highgui4110.lib” 68>已完成生成项目“opencv_dnn_objdetect.vcxproj”的操作 - 失败。 67> 正在创建库 E:/opencv-build/build/lib/Release/opencv_dnn_superres4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_dnn_superres4110.exp 67>opencv_dnn_superres.vcxproj -> E:\opencv-build\build\bin\Release\opencv_dnn_superres4110.dll 63> 正在创建库 E:/opencv-build/build/lib/Release/opencv_text4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_text4110.exp 62> 正在创建库 E:/opencv-build/build/lib/Release/opencv_video4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_video4110.exp 63>opencv_text.vcxproj -> E:\opencv-build\build\bin\Release\opencv_text4110.dll 69>------ 已启动生成: 项目: 
opencv_datasets, 配置: Release x64 ------ 62>opencv_video.vcxproj -> E:\opencv-build\build\bin\Release\opencv_video4110.dll 70>------ 已启动生成: 项目: opencv_ximgproc, 配置: Release x64 ------ 71>------ 已启动生成: 项目: opencv_cudabgsegm, 配置: Release x64 ------ 72>------ 已启动生成: 项目: opencv_bgsegm, 配置: Release x64 ------ 69>ar_hmdb.cpp 71>cmake_pch.cxx 69>ar_sports.cpp 69>dataset.cpp 69>fr_adience.cpp 72>cmake_pch.cxx 69>fr_lfw.cpp 69>gr_chalearn.cpp 69>gr_skig.cpp 69>hpe_humaneva.cpp 69>hpe_parse.cpp 70>cmake_pch.cxx 69>ir_affine.cpp 69>ir_robot.cpp 69>is_bsds.cpp 69>is_weizmann.cpp 69>msm_epfl.cpp 69>msm_middlebury.cpp 69>or_imagenet.cpp 66> 正在创建库 E:/opencv-build/build/lib/Release/opencv_mcc4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_mcc4110.exp 69>or_mnist.cpp 66>opencv_mcc.vcxproj -> E:\opencv-build\build\bin\Release\opencv_mcc4110.dll 69>or_pascal.cpp 69>or_sun.cpp 69>pd_caltech.cpp 69>pd_inria.cpp 69>slam_kitti.cpp 69>slam_tumindoor.cpp 69>sr_bsds.cpp 69>sr_div2k.cpp 69>sr_general100.cpp 69>tr_chars.cpp 69>tr_icdar.cpp 69>tr_svt.cpp 64> 正在创建库 E:/opencv-build/build/lib/Release/opencv_objdetect4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_objdetect4110.exp 69>track_alov.cpp 69>track_vot.cpp 69>util.cpp 71>opencv_cudabgsegm_main.cpp 71>mog.cpp 71>mog2.cpp 64>opencv_objdetect.vcxproj -> E:\opencv-build\build\bin\Release\opencv_objdetect4110.dll 73>------ 已启动生成: 项目: opencv_xobjdetect, 配置: Release x64 ------ 74>------ 已启动生成: 项目: opencv_wechat_qrcode, 配置: Release x64 ------ 75>------ 已启动生成: 项目: opencv_interactive-calibration, 配置: Release x64 ------ 76>------ 已启动生成: 项目: opencv_face, 配置: Release x64 ------ 77>------ 已启动生成: 项目: opencv_cudalegacy, 配置: Release x64 ------ 78>------ 已启动生成: 项目: opencv_aruco, 配置: Release x64 ------ 70>opencl_kernels_ximgproc.cpp 70>opencv_ximgproc_main.cpp 70>adaptive_manifold_filter_n.cpp 70>anisodiff.cpp 70>bilateral_texture_filter.cpp 70>brightedges.cpp 70>deriche_filter.cpp 70>disparity_filters.cpp 70>domain_transform.cpp 70>dtfilter_cpu.cpp 70>edge_drawing.cpp 70>edgeaware_filters_common.cpp 70>edgeboxes.cpp 70>edgepreserving_filter.cpp 70>estimated_covariance.cpp 70>fast_hough_transform.cpp 70>fast_line_detector.cpp 70>fbs_filter.cpp 70>fgs_filter.cpp 70>find_ellipses.cpp 70>fourier_descriptors.cpp 70>graphsegmentation.cpp 70>guided_filter.cpp 72>opencv_bgsegm_main.cpp 72>bgfg_gaussmix.cpp 72>bgfg_gmg.cpp 72>bgfg_gsoc.cpp 72>bgfg_subcnt.cpp 70>joint_bilateral_filter.cpp 76>cmake_pch.cxx 70>l0_smooth.cpp 70>lsc.cpp 70>niblack_thresholding.cpp 70>paillou_filter.cpp 75>calibController.cpp 70>peilin.cpp 70>quaternion.cpp 70>radon_transform.cpp 75>calibPipeline.cpp 75>frameProcessor.cpp 72>synthetic_seq.cpp 73>cmake_pch.cxx 75>main.cpp 70>ridgedetectionfilter.cpp 75>parametersController.cpp 70>rolling_guidance_filter.cpp 70>scansegment.cpp 70>seeds.cpp 70>run_length_morphology.cpp 70>selectivesearchsegmentation.cpp 70>slic.cpp 75>rotationConverters.cpp 74>cmake_pch.cxx 70>sparse_match_interpolators.cpp 70>structured_edge_detection.cpp 78>cmake_pch.cxx 70>thinning.cpp 70>weighted_median_filter.cpp 77>cmake_pch.cxx 71> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudabgsegm4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudabgsegm4110.exp 71>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 61>boostdesc.cpp 71>opencv_cudabgsegm.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudabgsegm4110.dll 74>opencv_wechat_qrcode_main.cpp 69>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_imgcodecs4110.lib” 74>binarizermgr.cpp 
74>decodermgr.cpp 74>align.cpp 74>ssd_detector.cpp 74>imgsource.cpp 74>super_scale.cpp 74>wechat_qrcode.cpp 74>binarizer.cpp 74>binarybitmap.cpp 74>adaptive_threshold_mean_binarizer.cpp 74>fast_window_binarizer.cpp 74>global_histogram_binarizer.cpp 74>hybrid_binarizer.cpp 74>simple_adaptive_binarizer.cpp 74>bitarray.cpp 70>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_imgcodecs4110.lib” 72> 正在创建库 E:/opencv-build/build/lib/Release/opencv_bgsegm4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_bgsegm4110.exp 74>bitmatrix.cpp 74>bitsource.cpp 74>bytematrix.cpp 74>characterseteci.cpp 74>decoder_result.cpp 69>已完成生成项目“opencv_datasets.vcxproj”的操作 - 失败。 79>------ 已启动生成: 项目: opencv_tracking, 配置: Release x64 ------ 70>已完成生成项目“opencv_ximgproc.vcxproj”的操作 - 失败。 71>已完成生成项目“opencv_cudabgsegm.vcxproj”的操作。 80>------ 已启动生成: 项目: opencv_optflow, 配置: Release x64 ------ 74>detector_result.cpp 74>greyscale_luminance_source.cpp 74>greyscale_rotated_luminance_source.cpp 74>grid_sampler.cpp 74>imagecut.cpp 74>kmeans.cpp 74>perspective_transform.cpp 74>genericgf.cpp 74>genericgfpoly.cpp 74>reed_solomon_decoder.cpp 74>str.cpp 74>stringutils.cpp 74>unicomblock.cpp 74>errorhandler.cpp 74>luminance_source.cpp 74>bitmatrixparser.cpp 61>logos.cpp 74>datablock.cpp 78>opencv_aruco_main.cpp 74>datamask.cpp 74>decoded_bit_stream_parser.cpp 78>aruco.cpp 74>decoder.cpp 78>aruco_calib.cpp 74>mode.cpp 78>charuco.cpp 74>alignment_pattern.cpp 74>alignment_pattern_finder.cpp 76>opencv_face_main.cpp 74>detector.cpp 76>bif.cpp 74>finder_pattern.cpp 74>finder_pattern_finder.cpp 76>eigen_faces.cpp 74>finder_pattern_info.cpp 74>pattern_result.cpp 76>face_alignment.cpp 74>error_correction_level.cpp 74>format_information.cpp 76>face_basic.cpp 76>facemark.cpp 76>facemarkAAM.cpp 76>facemarkLBF.cpp 76>facerec.cpp 76>fisher_faces.cpp 76>getlandmarks.cpp 74>qrcode_reader.cpp 74>version.cpp 76>lbph_faces.cpp 76>mace.cpp 76>predict_collector.cpp 74>reader.cpp 74>result.cpp 76>regtree.cpp 74>resultpoint.cpp 80>cmake_pch.cxx 76>trainFacemark.cpp 72>opencv_bgsegm.vcxproj -> E:\opencv-build\build\bin\Release\opencv_bgsegm4110.dll 75>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_highgui4110.lib” 73>opencv_xobjdetect_main.cpp 75>已完成生成项目“opencv_interactive-calibration.vcxproj”的操作 - 失败。 73>feature_evaluator.cpp 73>lbpfeatures.cpp 73>waldboost.cpp 79>cmake_pch.cxx 73>wbdetector.cpp 61>Logos.cpp 77>opencv_cudalegacy_main.cpp 77>NCV.cpp 77>bm.cpp 77>bm_fast.cpp 77>calib3d.cpp 77>fgd.cpp 77>gmg.cpp 77>graphcuts.cpp 77>image_pyramid.cpp 77>interpolate_frames.cpp 77>needle_map.cpp 61>vgg.cpp 78> 正在创建库 E:/opencv-build/build/lib/Release/opencv_aruco4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_aruco4110.exp 73>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_imgcodecs4110.lib” 73>已完成生成项目“opencv_xobjdetect.vcxproj”的操作 - 失败。 81>------ 已启动生成: 项目: opencv_waldboost_detector, 配置: Release x64 ------ 78>opencv_aruco.vcxproj -> E:\opencv-build\build\bin\Release\opencv_aruco4110.dll 81>waldboost_detector.cpp 77> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudalegacy4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudalegacy4110.exp 80>opencl_kernels_optflow.cpp 80>opencv_optflow_main.cpp 80>deepflow.cpp 80>interfaces.cpp 80>motempl.cpp 80>pcaflow.cpp 80>geo_interpolation.cpp 80>rlof_localflow.cpp 80>rlofflow.cpp 77>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 80>simpleflow.cpp 80>sparse_matching_gpc.cpp 80>sparsetodenseflow.cpp 80>tvl1flow.cpp 76> 正在创建库 
E:/opencv-build/build/lib/Release/opencv_face4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_face4110.exp 74> 正在创建库 E:/opencv-build/build/lib/Release/opencv_wechat_qrcode4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_wechat_qrcode4110.exp 77>opencv_cudalegacy.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudalegacy4110.dll 77>已完成生成项目“opencv_cudalegacy.vcxproj”的操作。 82>------ 已启动生成: 项目: opencv_cudaobjdetect, 配置: Release x64 ------ 79>opencl_kernels_tracking.cpp 79>opencv_tracking_main.cpp 79>augmented_unscented_kalman.cpp 79>feature.cpp 79>featureColorName.cpp 79>gtrUtils.cpp 79>kuhn_munkres.cpp 79>mosseTracker.cpp 79>multiTracker.cpp 79>multiTracker_alt.cpp 79>onlineBoosting.cpp 79>tldDataset.cpp 79>tldDetector.cpp 79>tldEnsembleClassifier.cpp 79>tldModel.cpp 79>tldTracker.cpp 79>tldUtils.cpp 79>tracker.cpp 74>opencv_wechat_qrcode.vcxproj -> E:\opencv-build\build\bin\Release\opencv_wechat_qrcode4110.dll 76>opencv_face.vcxproj -> E:\opencv-build\build\bin\Release\opencv_face4110.dll 79>trackerBoosting.cpp 79>trackerBoostingModel.cpp 79>trackerCSRT.cpp 79>trackerCSRTScaleEstimation.cpp 79>trackerCSRTSegmentation.cpp 79>trackerCSRTUtils.cpp 79>trackerFeature.cpp 81>LINK : fatal error LNK1181: 无法打开输入文件“..\..\..\..\lib\Release\opencv_highgui4110.lib” 79>trackerFeatureSet.cpp 79>trackerKCF.cpp 79>trackerMIL_legacy.cpp 79>trackerMedianFlow.cpp 79>trackerSampler.cpp 81>已完成生成项目“opencv_waldboost_detector.vcxproj”的操作 - 失败。 79>trackerSamplerAlgorithm.cpp 79>trackerStateEstimator.cpp 79>tracking_by_matching.cpp 79>tracking_utils.cpp 79>twist.cpp 79>unscented_kalman.cpp 61> 正在创建库 E:/opencv-build/build/lib/Release/opencv_xfeatures2d4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_xfeatures2d4110.exp 61>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 80>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_ximgproc4110.lib” 80>已完成生成项目“opencv_optflow.vcxproj”的操作 - 失败。 83>------ 已启动生成: 项目: opencv_cudaoptflow, 配置: Release x64 ------ 61>opencv_xfeatures2d.vcxproj -> E:\opencv-build\build\bin\Release\opencv_xfeatures2d4110.dll 61>已完成生成项目“opencv_xfeatures2d.vcxproj”的操作。 84>------ 已启动生成: 项目: opencv_stitching, 配置: Release x64 ------ 82>cmake_pch.cxx 83>cmake_pch.cxx 84>cmake_pch.cxx 79>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_datasets4110.lib” 79>已完成生成项目“opencv_tracking.vcxproj”的操作 - 失败。 85>------ 已启动生成: 项目: opencv_stereo, 配置: Release x64 ------ 85>cmake_pch.cxx 83>brox.cpp 83>farneback.cpp 83>nvidiaOpticalFlow.cpp 83>pyrlk.cpp 83>tvl1flow.cpp 83>opencv_cudaoptflow_main.cpp 83>E:\opencv-build\opencv_contrib\modules\cudaoptflow\src\nvidiaOpticalFlow.cpp(52,10): error C1083: 无法打开包括文件: “nvOpticalFlowCuda.h”: No such file or directory 83>(编译源文件“../../../opencv_contrib/modules/cudaoptflow/src/nvidiaOpticalFlow.cpp”) 82>opencv_cudaobjdetect_main.cpp 82>cascadeclassifier.cpp 82>hog.cpp 83>已完成生成项目“opencv_cudaoptflow.vcxproj”的操作 - 失败。 86>------ 已启动生成: 项目: opencv_videostab, 配置: Release x64 ------ 87>------ 已启动生成: 项目: opencv_superres, 配置: Release x64 ------ 85>opencv_stereo_main.cpp 85>descriptor.cpp 85>quasi_dense_stereo.cpp 85>stereo_binary_bm.cpp 85>stereo_binary_sgbm.cpp 86>cmake_pch.cxx 87>cmake_pch.cxx 84>opencl_kernels_stitching.cpp 84>opencv_stitching_main.cpp 84>autocalib.cpp 84>blenders.cpp 84>camera.cpp 84>exposure_compensate.cpp 84>matchers.cpp 84>motion_estimators.cpp 84>seam_finders.cpp 84>stitcher.cpp 84>timelapsers.cpp 84>util.cpp 84>warpers.cpp 84>warpers_cuda.cpp 85>LINK : fatal error LNK1181: 
无法打开输入文件“..\..\lib\Release\opencv_tracking4110.lib” 85>已完成生成项目“opencv_stereo.vcxproj”的操作 - 失败。 82> 正在创建库 E:/opencv-build/build/lib/Release/opencv_cudaobjdetect4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_cudaobjdetect4110.exp 82>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 82>opencv_cudaobjdetect.vcxproj -> E:\opencv-build\build\bin\Release\opencv_cudaobjdetect4110.dll 82>已完成生成项目“opencv_cudaobjdetect.vcxproj”的操作。 86>opencv_videostab_main.cpp 86>deblurring.cpp 86>fast_marching.cpp 86>frame_source.cpp 86>global_motion.cpp 86>inpainting.cpp 86>log.cpp 86>motion_stabilizing.cpp 86>optical_flow.cpp 86>outlier_rejection.cpp 86>stabilizer.cpp 86>wobble_suppression.cpp 84> 正在创建库 E:/opencv-build/build/lib/Release/opencv_stitching4110.lib 和对象 E:/opencv-build/build/lib/Release/opencv_stitching4110.exp 84>LINK : warning LNK4098: 默认库“LIBCMT”与其他库的使用冲突;请使用 /NODEFAULTLIB:library 87>opencl_kernels_superres.cpp 87>opencv_superres_main.cpp 87>btv_l1.cpp 87>btv_l1_cuda.cpp 87>frame_source.cpp 87>input_array_utility.cpp 87>optical_flow.cpp 87>super_resolution.cpp 84>opencv_stitching.vcxproj -> E:\opencv-build\build\bin\Release\opencv_stitching4110.dll 84>已完成生成项目“opencv_stitching.vcxproj”的操作。 87>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_cudacodec4110.lib” 87>已完成生成项目“opencv_superres.vcxproj”的操作 - 失败。 86>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_videoio4110.lib” 86>已完成生成项目“opencv_videostab.vcxproj”的操作 - 失败。 88>------ 已启动生成: 项目: opencv_python3, 配置: Release x64 ------ 88>LINK : fatal error LNK1181: 无法打开输入文件“..\..\lib\Release\opencv_xobjdetect4110.lib” 88>已完成生成项目“opencv_python3.vcxproj”的操作 - 失败。 89>------ 已启动生成: 项目: INSTALL, 配置: Release x64 ------ 89>1> 89>-- Install configuration: "Release" 89>CMake Error at cmake_install.cmake:36 (file): 89> file INSTALL cannot find 89> "E:/opencv-build/build/3rdparty/ippicv/ippicv_win/icv/readme.htm": No 89> error. 89> 89> 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: 命令“setlocal 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: D:\CMake\bin\cmake.exe -DBUILD_TYPE=Release -P cmake_install.cmake 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: if %errorlevel% neq 0 goto :cmEnd 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: :cmEnd 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: endlocal & call :cmErrorLevel %errorlevel% & goto :cmDone 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: :cmErrorLevel 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: exit /b %1 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: :cmDone 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: if %errorlevel% neq 0 goto :VCEnd 89>D:\Visual Studio\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(166,5): error MSB3073: :VCEnd”已退出,代码为 1。 89>已完成生成项目“INSTALL.vcxproj”的操作 - 失败。 ========== 生成: 67 成功,22 失败,15 最新,0 已跳过 ========== ========== 生成 于 22:55 完成,耗时 03:12.593 分钟 ==========
