
Visualizing GitHub User and Repository Statistics with GitHub Actions

### Knowledge Point Analysis

#### Title: "profile-action-stats"

- **GitHub Actions**: GitHub Actions is GitHub's built-in automation platform. It lets developers define custom workflows that run in response to repository events such as pushes, merge/pull requests, or scheduled (cron) triggers.
- **Generating statistics visualizations**: This refers to presenting data graphically. In the GitHub context, it typically means charts of user or repository metrics such as follower counts, stars, forks, and commit activity.

#### Description: "Generate visualizations of GitHub user and repository statistics using GitHub Actions"

- **Background**: A GitHub profile page has a limitation: the visible star, fork, and follower counts do not fully reflect a user's contribution to the open-source community, in particular contributions to private repositories and historical contributions.
- **Project goal**: The project combines GitHub Actions with the GitHub API to collect statistics about the user's profile and repositories. The collected data is used to generate charts that can be displayed on the user's profile or repository pages.
- **Implementation**: Because GitHub Actions runs workflows on GitHub's own infrastructure, no standalone server is needed to periodically refresh the statistics and regenerate the images. A workflow triggered on a schedule (or other events) can carry out the data collection, processing, and image-update steps automatically.
- **Permissions**: If the project is configured with a GitHub access token that has sufficient scope to read private repositories, it can include private-repository data in the analysis. This requires users to configure and run the analysis under their own account rather than delegating it to a third party.

#### Tag: "Python"

- **Python**: Python is a widely used high-level programming language that supports multiple paradigms, including object-oriented, imperative, functional, and procedural programming. It is a popular choice for data science, automation scripting, and web development. In this project, Python is most likely used to communicate with the GitHub API, process the data, and generate the statistical charts.

#### Archive file list: "profile-action-stats-master"

- **profile-action-stats-master**: This name suggests the archive contains the `master` branch of the "profile-action-stats" repository. In Git-based version control, `master` conventionally holds the main line of development, so this directory likely contains the project's core scripts and workflow files.

### Summary

In summary, "profile-action-stats" is a tool that uses GitHub Actions to automatically collect and visualize statistics about a GitHub user and their repositories. It extracts data via the GitHub API and uses GitHub Actions as its runtime platform, so the statistics are updated and displayed without a dedicated server. Given an access token with the appropriate permissions, it can also include private-repository data. The Python tag indicates the technology stack for data processing and chart generation. With this tool, users can present a fuller picture of their open-source contributions, including private and historical contributions that are not directly visible on a standard profile page.
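The data-collection step described above can be sketched in Python using only the standard library. The function names, and the use of the `/user/repos` endpoint with a personal access token, are illustrative assumptions about how such a project might work, not the project's actual code:

```python
import json
import urllib.request

API = "https://api.github.com"


def fetch_repos(token):
    """Fetch every repository visible to the token, one page at a time.

    A token with the `repo` scope also returns private repositories,
    which is what allows private contributions to be counted.
    """
    repos, page = [], 1
    while True:
        req = urllib.request.Request(
            f"{API}/user/repos?per_page=100&page={page}",
            headers={"Authorization": f"token {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            batch = json.load(resp)
        if not batch:  # an empty page means we have seen everything
            return repos
        repos.extend(batch)
        page += 1


def summarize(repos):
    """Aggregate total stars, total forks, and repository count per language."""
    summary = {"stars": 0, "forks": 0, "languages": {}}
    for repo in repos:
        summary["stars"] += repo.get("stargazers_count", 0)
        summary["forks"] += repo.get("forks_count", 0)
        lang = repo.get("language")
        if lang:
            summary["languages"][lang] = summary["languages"].get(lang, 0) + 1
    return summary
```

`summarize` works on the plain dictionaries the API returns, so it can be tested offline with sample data.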
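For the visualization step, one lightweight approach is to emit raw SVG, which GitHub renders natively in profile READMEs. The real project may well use a plotting library instead; this stdlib-only sketch (with arbitrary styling values) just shows the idea of turning a language summary into an embeddable image:

```python
def language_bar_svg(languages, width=400, bar_height=18):
    """Render per-language repository counts as a horizontal SVG bar chart.

    `languages` maps language name -> repo count; returns SVG markup as a
    string suitable for writing to a .svg file embedded in a README.
    """
    items = sorted(languages.items(), key=lambda kv: kv[1], reverse=True)
    max_count = max((count for _, count in items), default=1)
    rows = []
    for i, (lang, count) in enumerate(items):
        y = i * (bar_height + 4)
        # scale bars so the longest one takes ~60% of the image width
        bar_w = int(width * 0.6 * count / max_count)
        rows.append(
            f'<text x="0" y="{y + bar_height - 4}" font-size="12">{lang}</text>'
            f'<rect x="120" y="{y}" width="{bar_w}" height="{bar_height}" fill="#4c71f2"/>'
            f'<text x="{125 + bar_w}" y="{y + bar_height - 4}" font-size="12">{count}</text>'
        )
    height = len(items) * (bar_height + 4)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        + "".join(rows)
        + "</svg>"
    )
```

Writing the returned string to a file tracked in the repository lets the profile README reference a stable image URL while the workflow rewrites its contents.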

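The serverless implementation described above, a scheduled workflow that refreshes the data and commits regenerated images, would be wired up in a workflow file such as `.github/workflows/stats.yml`. Everything here (the step layout, the `generate_images.py` entry point, the `ACCESS_TOKEN` secret name, the `generated/` output directory) is an illustrative assumption, not the project's actual configuration:

```yaml
name: Update profile stats
on:
  schedule:
    - cron: "0 2 * * *"   # regenerate the images once a day
  workflow_dispatch: {}    # allow manual runs from the Actions tab
jobs:
  stats:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python generate_images.py   # hypothetical entry point
        env:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
      - run: |
          git config user.name github-actions
          git config user.email [email protected]
          git add generated/
          git commit -m "Update stats images" || true   # no-op if nothing changed
          git push
```

The token is supplied as a repository secret, which is why a user wanting private-repository statistics must run the workflow in their own account: the secret never leaves it.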
n-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-backward-codecs\8.6.2\lucene-backward-codecs-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-grouping\8.6.2\lucene-grouping-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-highlighter\8.6.2\lucene-highlighter-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-join\8.6.2\lucene-join-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-memory\8.6.2\lucene-memory-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-misc\8.6.2\lucene-misc-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-queries\8.6.2\lucene-queries-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-queryparser\8.6.2\lucene-queryparser-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-sandbox\8.6.2\lucene-sandbox-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-spatial-extras\8.6.2\lucene-spatial-extras-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-spatial3d\8.6.2\lucene-spatial3d-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\apache\lucene\lucene-suggest\8.6.2\lucene-suggest-8.6.2.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\elasticsearch-cli\7.9.3\elasticsearch-cli-7.9.3.jar;C:\Users\ASUS\.m2\repository\net\sf\jopt-simple\jopt-simple\5.0.2\jopt-simple-5.0.2.jar;C:\Users\ASUS\.m2\repository\com\carrotsearch\hppc\0.8.1\hppc-0.8.1.jar;C:\Users\ASUS\.m2\repository\joda-time\joda-time\2.10.4\joda-time-2.10.4.jar;C:\Users\ASUS\.m2\repository\com\tdunning\t-digest\3.2\t-digest-3.2.jar;C:\Users\ASUS\.m2\repository\org\hdrhistogram\HdrHistogram\2.1.9\HdrHistogram-2.1.9.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\jna\5.5.0\jna-5.5.0.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\client\elasticsearch-rest-client\7.9.3\elasticsearch-rest-client-7.9.3.jar;C:\Users\ASUS\.m2\repository\org\apache\httpcomponents\httpasyncclient\4.1.4\httpasyncclient-4.1.4.jar;C:\Users\ASUS\.m2\repository\org\apache\ht
tpcomponents\httpcore-nio\4.4.14\httpcore-nio-4.4.14.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\plugin\mapper-extras-client\7.9.3\mapper-extras-client-7.9.3.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\plugin\parent-join-client\7.9.3\parent-join-client-7.9.3.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\plugin\aggs-matrix-stats-client\7.9.3\aggs-matrix-stats-client-7.9.3.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\plugin\rank-eval-client\7.9.3\rank-eval-client-7.9.3.jar;C:\Users\ASUS\.m2\repository\org\elasticsearch\plugin\lang-mustache-client\7.9.3\lang-mustache-client-7.9.3.jar;C:\Users\ASUS\.m2\repository\com\github\spullara\mustache\java\compiler\0.9.6\compiler-0.9.6.jar;C:\Users\ASUS\.m2\repository\com\fasterxml\jackson\core\jackson-core\2.11.3\jackson-core-2.11.3.jar;C:\Users\ASUS\.m2\repository\org\springframework\boot\spring-boot-starter-mail\2.4.1\spring-boot-starter-mail-2.4.1.jar;C:\Users\ASUS\.m2\repository\org\springframework\spring-context-support\5.3.2\spring-context-support-5.3.2.jar;C:\Users\ASUS\.m2\repository\com\sun\mail\jakarta.mail\1.6.5\jakarta.mail-1.6.5.jar;C:\Users\ASUS\.m2\repository\com\sun\activation\jakarta.activation\1.2.2\jakarta.activation-1.2.2.jar;C:\Users\ASUS\.m2\repository\eu\bitwalker\UserAgentUtils\1.21\UserAgentUtils-1.21.jar;C:\Users\ASUS\.m2\repository\com\github\oshi\oshi-core\6.0.0\oshi-core-6.0.0.jar;C:\Users\ASUS\.m2\repository\net\java\dev\jna\jna-platform\5.10.0\jna-platform-5.10.0.jar;C:\Users\ASUS\.m2\repository\net\java\dev\jna\jna\4.5.2\jna-4.5.2.jar;C:\Users\ASUS\.m2\repository\org\quartz-scheduler\quartz\2.3.2\quartz-2.3.2.jar;C:\Users\ASUS\.m2\repository\com\mchange\mchange-commons-java\0.2.15\mchange-commons-java-0.2.15.jar;C:\Users\ASUS\.m2\repository\cn\hutool\hutool-all\5.8.3\hutool-all-5.8.3.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-all\0.62.2\flexmark-all-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark\0.62.2\flexmark-0.62.2.jar;C:\Users
\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-abbreviation\0.62.2\flexmark-ext-abbreviation-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util\0.62.2\flexmark-util-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-admonition\0.62.2\flexmark-ext-admonition-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-anchorlink\0.62.2\flexmark-ext-anchorlink-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-aside\0.62.2\flexmark-ext-aside-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-attributes\0.62.2\flexmark-ext-attributes-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-autolink\0.62.2\flexmark-ext-autolink-0.62.2.jar;C:\Users\ASUS\.m2\repository\org\nibor\autolink\autolink\0.6.0\autolink-0.6.0.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-definition\0.62.2\flexmark-ext-definition-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-emoji\0.62.2\flexmark-ext-emoji-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-enumerated-reference\0.62.2\flexmark-ext-enumerated-reference-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-escaped-character\0.62.2\flexmark-ext-escaped-character-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-footnotes\0.62.2\flexmark-ext-footnotes-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-gfm-issues\0.62.2\flexmark-ext-gfm-issues-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-gfm-strikethrough\0.62.2\flexmark-ext-gfm-strikethrough-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-gfm-tasklist\0.62.2\flexmark-ext-gfm-tasklist-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-gfm-users\0.62.2\flexmark-ext-gfm-users-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext
-gitlab\0.62.2\flexmark-ext-gitlab-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-jekyll-front-matter\0.62.2\flexmark-ext-jekyll-front-matter-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-jekyll-tag\0.62.2\flexmark-ext-jekyll-tag-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-media-tags\0.62.2\flexmark-ext-media-tags-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-macros\0.62.2\flexmark-ext-macros-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-ins\0.62.2\flexmark-ext-ins-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-xwiki-macros\0.62.2\flexmark-ext-xwiki-macros-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-superscript\0.62.2\flexmark-ext-superscript-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-tables\0.62.2\flexmark-ext-tables-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-toc\0.62.2\flexmark-ext-toc-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-typographic\0.62.2\flexmark-ext-typographic-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-wikilink\0.62.2\flexmark-ext-wikilink-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-yaml-front-matter\0.62.2\flexmark-ext-yaml-front-matter-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-ext-youtube-embedded\0.62.2\flexmark-ext-youtube-embedded-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-html2md-converter\0.62.2\flexmark-html2md-converter-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-jira-converter\0.62.2\flexmark-jira-converter-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-pdf-converter\0.62.2\flexmark-pdf-converter-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\openhtmltopdf\openhtmltopdf-core\1.0.0\openhtmltopdf-core
-1.0.0.jar;C:\Users\ASUS\.m2\repository\com\openhtmltopdf\openhtmltopdf-pdfbox\1.0.0\openhtmltopdf-pdfbox-1.0.0.jar;C:\Users\ASUS\.m2\repository\org\apache\pdfbox\pdfbox\2.0.16\pdfbox-2.0.16.jar;C:\Users\ASUS\.m2\repository\org\apache\pdfbox\fontbox\2.0.16\fontbox-2.0.16.jar;C:\Users\ASUS\.m2\repository\org\apache\pdfbox\xmpbox\2.0.16\xmpbox-2.0.16.jar;C:\Users\ASUS\.m2\repository\de\rototor\pdfbox\graphics2d\0.24\graphics2d-0.24.jar;C:\Users\ASUS\.m2\repository\com\openhtmltopdf\openhtmltopdf-rtl-support\1.0.0\openhtmltopdf-rtl-support-1.0.0.jar;C:\Users\ASUS\.m2\repository\com\ibm\icu\icu4j\59.1\icu4j-59.1.jar;C:\Users\ASUS\.m2\repository\com\openhtmltopdf\openhtmltopdf-jsoup-dom-converter\1.0.0\openhtmltopdf-jsoup-dom-converter-1.0.0.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-profile-pegdown\0.62.2\flexmark-profile-pegdown-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-ast\0.62.2\flexmark-util-ast-0.62.2.jar;C:\Users\ASUS\.m2\repository\org\jetbrains\annotations\15.0\annotations-15.0.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-builder\0.62.2\flexmark-util-builder-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-collection\0.62.2\flexmark-util-collection-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-data\0.62.2\flexmark-util-data-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-dependency\0.62.2\flexmark-util-dependency-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-format\0.62.2\flexmark-util-format-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-html\0.62.2\flexmark-util-html-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-misc\0.62.2\flexmark-util-misc-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-options\0.62.2\flexmark-util-options-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark
-util-sequence\0.62.2\flexmark-util-sequence-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-util-visitor\0.62.2\flexmark-util-visitor-0.62.2.jar;C:\Users\ASUS\.m2\repository\com\vladsch\flexmark\flexmark-youtrack-converter\0.62.2\flexmark-youtrack-converter-0.62.2.jar;C:\Users\ASUS\.m2\repository\log4j\log4j\1.2.17\log4j-1.2.17.jar;C:\Users\ASUS\.m2\repository\org\jsoup\jsoup\1.14.3\jsoup-1.14.3.jar;C:\Users\ASUS\.m2\repository\org\dom4j\dom4j\2.1.3\dom4j-2.1.3.jar;C:\Users\ASUS\.m2\repository\org\springframework\boot\spring-boot-starter-websocket\1.5.10.RELEASE\spring-boot-starter-websocket-1.5.10.RELEASE.jar;C:\Users\ASUS\.m2\repository\org\springframework\spring-messaging\5.3.2\spring-messaging-5.3.2.jar;C:\Users\ASUS\.m2\repository\org\springframework\spring-websocket\5.3.2\spring-websocket-5.3.2.jar;C:\Users\ASUS\.m2\repository\com\aliyun\oss\aliyun-sdk-oss\3.12.0\aliyun-sdk-oss-3.12.0.jar;C:\Users\ASUS\.m2\repository\org\apache\httpcomponents\httpclient\4.5.13\httpclient-4.5.13.jar;C:\Users\ASUS\.m2\repository\org\apache\httpcomponents\httpcore\4.4.14\httpcore-4.4.14.jar;C:\Users\ASUS\.m2\repository\commons-codec\commons-codec\1.15\commons-codec-1.15.jar;C:\Users\ASUS\.m2\repository\org\jdom\jdom2\2.0.6\jdom2-2.0.6.jar;C:\Users\ASUS\.m2\repository\org\codehaus\jettison\jettison\1.1\jettison-1.1.jar;C:\Users\ASUS\.m2\repository\stax\stax-api\1.0.1\stax-api-1.0.1.jar;C:\Users\ASUS\.m2\repository\com\aliyun\aliyun-java-sdk-core\4.5.10\aliyun-java-sdk-core-4.5.10.jar;C:\Users\ASUS\.m2\repository\commons-logging\commons-logging\1.2\commons-logging-1.2.jar;C:\Users\ASUS\.m2\repository\javax\xml\bind\jaxb-api\2.3.1\jaxb-api-2.3.1.jar;C:\Users\ASUS\.m2\repository\javax\activation\javax.activation-api\1.2.0\javax.activation-api-1.2.0.jar;C:\Users\ASUS\.m2\repository\org\jacoco\org.jacoco.agent\0.8.5\org.jacoco.agent-0.8.5-runtime.jar;C:\Users\ASUS\.m2\repository\org\ini4j\ini4j\0.5.4\ini4j-0.5.4.jar;C:\Users\ASUS\.m2\repository\io\opentracing\ope
ntracing-api\0.33.0\opentracing-api-0.33.0.jar;C:\Users\ASUS\.m2\repository\io\opentracing\opentracing-util\0.33.0\opentracing-util-0.33.0.jar;C:\Users\ASUS\.m2\repository\io\opentracing\opentracing-noop\0.33.0\opentracing-noop-0.33.0.jar;C:\Users\ASUS\.m2\repository\com\aliyun\aliyun-java-sdk-ram\3.1.0\aliyun-java-sdk-ram-3.1.0.jar;C:\Users\ASUS\.m2\repository\com\aliyun\aliyun-java-sdk-kms\2.11.0\aliyun-java-sdk-kms-2.11.0.jar;C:\Users\ASUS\.m2\repository\org\lionsoul\ip2region\2.7.0\ip2region-2.7.0.jar;C:\Users\ASUS\.m2\repository\com\anji-plus\spring-boot-starter-captcha\1.3.0\spring-boot-starter-captcha-1.3.0.jar;C:\Users\ASUS\.m2\repository\com\anji-plus\captcha\1.3.0\captcha-1.3.0.jar com.rulin.BlogApplication SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/C:/Users/ASUS/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/C:/Users/ASUS/.m2/repository/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]

[ASCII-art startup banner omitted]

Gitee: https://gitee.com/chengxuru/rulin-blog

2025-06-27 16:08:19.529 [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext:596 - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'apiArticleController' defined in file [E:\AliCloud\rulin-blog\pblog\target\classes\com\rulin\controller\api\ApiArticleController.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.rulin.service.ArticleService' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
2025-06-27 16:08:19.578 [main] ERROR o.s.b.diagnostics.LoggingFailureAnalysisReporter:40 -

***************************
APPLICATION FAILED TO START
***************************

Description:

Parameter 0 of constructor in com.rulin.controller.api.ApiArticleController required a bean of type 'com.rulin.service.ArticleService' that could not be found.

Action:

Consider defining a bean of type 'com.rulin.service.ArticleService' in your configuration.

Process finished with exit code 1
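The failure above means the Spring context holds no bean of type `com.rulin.service.ArticleService` to inject into `ApiArticleController`'s constructor. A minimal sketch of the most common fix; the class and package names below are assumptions for illustration, not taken from the project:

```java
// Sketch only: names are hypothetical. Common cause (a): the implementation class
// exists but is never registered as a bean. Annotating it with @Service fixes that.
package com.rulin.service.impl;

import org.springframework.stereotype.Service;

import com.rulin.service.ArticleService;

@Service
public class ArticleServiceImpl implements ArticleService {
    // ... interface method implementations elided ...
}
```

If the class is already annotated, the other common cause is (b): it sits in a package outside the component scan rooted at `com.rulin.BlogApplication` (the main class in the command line above); in that case an explicit `@ComponentScan` on the application class is the usual remedy.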


Creating a WooCommerce reseller plugin involves several steps. Below is a structured guide to help you build a basic reseller system. This guide assumes you have a working knowledge of WordPress/WooCommerce development, PHP, and MySQL.

---

### **1. Plugin Structure**

Create a new folder in `wp-content/plugins/` (e.g., `reseller-plugin`), and add the following files:

- `reseller-plugin.php` (Main plugin file)
- `includes/` (Folder for helper functions)
- `admin/` (Admin-facing code)
- `public/` (Frontend-facing code)

---

### **2. Core Features**

Your plugin should include:

- **Reseller Registration/Management**
- **Commission Calculation**
- **Product Assignment to Resellers**
- **Order Tracking**
- **Dashboard for Resellers**
- **Payment Handling**

---

### **3. Step-by-Step Implementation**

#### **Step 1: Register a Reseller User Role**

Add a custom user role (`reseller`) with specific capabilities.

```php
// In reseller-plugin.php
function register_reseller_role() {
    add_role(
        'reseller',
        'Reseller',
        array(
            'read'                     => true,
            'edit_posts'               => false,
            'delete_posts'             => false,
            'manage_woocommerce'       => true,
            'view_woocommerce_reports' => true,
        )
    );
}
register_activation_hook(__FILE__, 'register_reseller_role');
```

---

#### **Step 2: Add Reseller Commission Settings**

Allow resellers to set their commission rate (e.g., in their profile).

```php
// Add commission field to user profile
function reseller_commission_field($user) {
    if (in_array('reseller', $user->roles)) {
        ?>
        <h3>Reseller Settings</h3>
        <label for="commission_rate">Commission Rate (%)</label>
        <input type="number" name="commission_rate" id="commission_rate"
               value="<?php echo esc_attr(get_user_meta($user->ID, 'commission_rate', true)); ?>"
               class="regular-text" min="0" max="100" step="0.1">
        <?php
    }
}
add_action('show_user_profile', 'reseller_commission_field');
add_action('edit_user_profile', 'reseller_commission_field');

// Save commission rate
function save_reseller_commission_field($user_id) {
    if (current_user_can('edit_user', $user_id) && isset($_POST['commission_rate'])) {
        update_user_meta($user_id, 'commission_rate', sanitize_text_field($_POST['commission_rate']));
    }
}
add_action('personal_options_update', 'save_reseller_commission_field');
add_action('edit_user_profile_update', 'save_reseller_commission_field');
```

---

#### **Step 3: Assign Products to Resellers**

Add a meta box to WooCommerce products to link them to a reseller.

```php
// Add reseller dropdown to product editor
function reseller_product_meta_box() {
    add_meta_box(
        'reseller_product_meta',
        'Reseller Settings',
        'reseller_product_meta_callback',
        'product',
        'side',
        'default'
    );
}
add_action('add_meta_boxes', 'reseller_product_meta_box');

function reseller_product_meta_callback($post) {
    $resellers = get_users(array('role' => 'reseller'));
    $selected_reseller = get_post_meta($post->ID, '_reseller_id', true);
    ?>
    <label for="reseller_id">Assign to Reseller:</label>
    <select name="reseller_id" id="reseller_id" class="widefat">
        <option value="">None</option>
        <?php foreach ($resellers as $reseller) : ?>
            <option value="<?php echo esc_attr($reseller->ID); ?>" <?php selected($selected_reseller, $reseller->ID); ?>>
                <?php echo esc_html($reseller->display_name); ?>
            </option>
        <?php endforeach; ?>
    </select>
    <?php
}

// Save reseller assignment
function save_reseller_product_meta($post_id) {
    if (isset($_POST['reseller_id'])) {
        update_post_meta($post_id, '_reseller_id', absint($_POST['reseller_id']));
    }
}
add_action('save_post_product', 'save_reseller_product_meta');
```

---

#### **Step 4: Calculate Commission on Order Completion**

Hook into WooCommerce order completion to calculate commissions.
```php
function calculate_reseller_commission($order_id) {
    $order = wc_get_order($order_id);
    foreach ($order->get_items() as $item) {
        $product_id  = $item->get_product_id();
        $reseller_id = get_post_meta($product_id, '_reseller_id', true);

        if ($reseller_id) {
            $commission_rate = (float) get_user_meta($reseller_id, 'commission_rate', true);
            $total           = $item->get_total();
            $commission      = ($total * $commission_rate) / 100;

            // Accumulate the commission (use a custom table or option for production)
            $pending = (float) get_user_meta($reseller_id, 'pending_commission', true);
            update_user_meta($reseller_id, 'pending_commission', $pending + $commission);
        }
    }
}
add_action('woocommerce_order_status_completed', 'calculate_reseller_commission');
```

---

#### **Step 5: Create a Reseller Dashboard**

Add a shortcode for resellers to view their earnings and products.

```php
// Shortcode for reseller dashboard
function reseller_dashboard_shortcode() {
    if (!current_user_can('reseller')) return '';

    $reseller_id = get_current_user_id();
    $commission  = get_user_meta($reseller_id, 'pending_commission', true);
    $products    = get_posts(array(
        'post_type'  => 'product',
        'meta_key'   => '_reseller_id',
        'meta_value' => $reseller_id,
    ));

    ob_start();
    ?>
    <h3>Your Commission: $<?php echo esc_html($commission); ?></h3>
    <h4>Your Products</h4>
    <ul>
        <?php foreach ($products as $product) : ?>
            <li><?php echo esc_html($product->post_title); ?></li>
        <?php endforeach; ?>
    </ul>
    <?php
    return ob_get_clean();
}
add_shortcode('reseller_dashboard', 'reseller_dashboard_shortcode');
```

---

#### **Step 6: Handle Payouts (Manual for Now)**

Create an admin page to mark commissions as paid.

```php
// Add admin menu for reseller payouts
function reseller_payouts_menu() {
    add_submenu_page(
        'users.php',
        'Reseller Payouts',
        'Payouts',
        'manage_options',
        'reseller-payouts',
        'reseller_payouts_page'
    );
}
add_action('admin_menu', 'reseller_payouts_menu');

function reseller_payouts_page() {
    // Logic to display and process payouts
}
```

---

### **4. Extend Functionality**

- **Automated Payouts**: Integrate PayPal/Stripe API for automatic payments.
- **Reports**: Use WooCommerce's `WC_Admin_Report` class for sales reports.
- **Multi-Level Resellers**: Add support for tiers (e.g., resellers can have sub-resellers).

---

### **5. Testing & Security**

- Test with different user roles and product types.
- Sanitize all inputs and use nonces for forms.
- Use WordPress transients for caching commission data.

---

### **6. Documentation**

- Provide setup instructions for users.
- Explain how resellers can manage their products and track earnings.

---

### **Final Notes**

This is a basic framework. For a production plugin, you’ll need to:

- Add error handling.
- Optimize database queries.
- Follow WordPress coding standards.
- Consider using namespaces and autoloading.

Would you like me to elaborate on any specific part (e.g., automated payouts, reporting, or security)?

2025-09-06 02:39:42.209 11752-12383 ADB_SERVICES adbd I post waitpid (pid=12382) status=0000 2025-09-06 02:39:42.209 11752-11752 ADB_SERVICES adbd I for fd 32, revents = 2011 2025-09-06 02:39:42.210 11752-11752 ADB_SERVICES adbd I for fd 32, revents = 2011 2025-09-06 02:39:42.359 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:42.353 626-626 ata_acm [email protected] W type=1400 audit(0.0:19938): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:42.860 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:42.853 626-626 ata_acm [email protected] W type=1400 audit(0.0:19939): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:43.361 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:43.353 626-626 ata_acm [email protected] W type=1400 audit(0.0:19940): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:43.862 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:43.853 626-626 ata_acm [email protected] W type=1400 audit(0.0:19941): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:43.922 1095-1191 PowerWrap system_server I PowerHal_TouchBoost 2025-09-06 02:39:43.960 459-459 SurfaceFlinger surfaceflinger I [Built-in Screen (type:0)] 
fps:2.280133,dur:4824.28,max:4354.80,min:6.24 2025-09-06 02:39:43.983 1095-1191 PowerWrap system_server I PowerHal_TouchBoost 2025-09-06 02:39:39.867 1095-1106 chatty system_server I uid=1000(system) Binder:1095_2 identical 2 lines 2025-09-06 02:39:39.868 1095-1106 NetworkStatsRecorder system_server W unknown interfaces [wlan0, lo], ignoring those stats 2025-09-06 02:39:43.989 12323-12323 Timeline person.tools.treasurebox I Timeline: Activity_launch_request time:8410125 2025-09-06 02:39:43.991 1095-1106 ActivityManager system_server I START u0 {cmp=person.tools.treasurebox/.customview.view.LineChartMarkerActivity} from uid 10135 2025-09-06 02:39:43.992 1095-1106 BoostFramework system_server E BoostFramework() : Exception_1 = java.lang.ClassNotFoundException: com.qualcomm.qti.Performance 2025-09-06 02:39:43.993 1095-1106 BoostFramework system_server E BoostFramework() Ux Perf: Exception = java.lang.ClassNotFoundException: com.qualcomm.qti.UxPerformance 2025-09-06 02:39:43.993 1095-1106 BoostFramework system_server E BoostFramework() : Exception_1 = java.lang.ClassNotFoundException: com.qualcomm.qti.Performance 2025-09-06 02:39:43.994 1095-1106 BoostFramework system_server E BoostFramework() Ux Perf: Exception = java.lang.ClassNotFoundException: com.qualcomm.qti.UxPerformance 2025-09-06 02:39:43.995 440-473 [email protected] [email protected] I powerHintAsync hint:8, data:1 2025-09-06 02:39:43.996 440-472 libPowerHal [email protected] I 8: cpu_ctrl set freq: 2001000 -1 1500000 -1 2025-09-06 02:39:44.000 1095-1106 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:44.009 1095-1106 chatty system_server I uid=1000(system) Binder:1095_2 identical 2 lines 2025-09-06 02:39:44.010 1095-1106 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:44.014 440-473 [email protected] [email protected] I notifyAppState_2_1 pack:person.tools.treasurebox, act:person.tools.treasurebox.customview.view.LineChartMarkerActivity, pid:12323, uid:10135, 
state:1
2025-09-06 02:39:44.016 1095-1106 Timeline system_server I Timeline: App_transition_ready time:8410151
2025-09-06 02:39:44.021 12323-12323 ActivityThread person.tools.treasurebox W handleWindowVisibility: no activity for token android.os.BinderProxy@9149a1f
2025-09-06 02:39:44.064 12323-12356 ViewContentFactory person.tools.treasurebox D initViewContentFetcherClass
2025-09-06 02:39:44.065 12323-12356 ContentCatcher person.tools.treasurebox I ViewContentFetcher : ViewContentFetcher
2025-09-06 02:39:44.065 12323-12356 ViewContentFactory person.tools.treasurebox D createInterceptor took 1ms
2025-09-06 02:39:44.094 12323-12323 AndroidRuntime person.tools.treasurebox D Shutting down VM
2025-09-06 02:39:44.098 12323-12323 AndroidRuntime person.tools.treasurebox E FATAL EXCEPTION: main
    Process: person.tools.treasurebox, PID: 12323
    java.lang.RuntimeException: Unable to start activity ComponentInfo{person.tools.treasurebox/person.tools.treasurebox.customview.view.LineChartMarkerActivity}: java.lang.NullPointerException: Attempt to invoke virtual method 'com.github.mikephil.charting.components.Description com.github.mikephil.charting.charts.LineChart.getDescription()' on a null object reference
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2976)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3113)
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78)
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:113)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:71)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1858)
        at android.os.Handler.dispatchMessage(Handler.java:106)
        at android.os.Looper.loop(Looper.java:201)
        at android.app.ActivityThread.main(ActivityThread.java:6820)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:547)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:922)
    Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'com.github.mikephil.charting.components.Description com.github.mikephil.charting.charts.LineChart.getDescription()' on a null object reference
        at person.tools.treasurebox.customview.view.LineChartMarkerActivity.setupLineChart(LineChartMarkerActivity.java:44)
        at person.tools.treasurebox.customview.view.LineChartMarkerActivity.initView(LineChartMarkerActivity.java:33)
        at person.tools.treasurebox.customview.view.LineChartMarkerActivity.onCreate(LineChartMarkerActivity.java:27)
        at android.app.Activity.performCreate(Activity.java:7224)
        at android.app.Activity.performCreate(Activity.java:7213)
        at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1272)
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2956)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3113)
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78)
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:113)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:71)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1858)
        at android.os.Handler.dispatchMessage(Handler.java:106)
        at android.os.Looper.loop(Looper.java:201)
        at android.app.ActivityThread.main(ActivityThread.java:6820)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:547)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:922)
2025-09-06 02:39:44.111 440-473 [email protected] [email protected] I notifyAppState_2_1 pack:person.tools.treasurebox, act:person.tools.treasurebox, pid:12323, uid:10135, state:3
2025-09-06 02:39:44.111 2030-2053 octvm_klo mcd
I klo lock 2025-09-06 02:39:44.113 1095-1106 ActivityManager system_server W Force finishing activity person.tools.treasurebox/.customview.view.LineChartMarkerActivity 2025-09-06 02:39:44.115 1095-1106 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:44.118 1095-1106 ActivityManager system_server W Force finishing activity person.tools.treasurebox/.customview.view.CustomViewTestActivity 2025-09-06 02:39:44.115 1095-1106 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:44.120 1095-12385 AES system_server W Exception Log handling... 2025-09-06 02:39:44.120 1095-12385 AES system_server W Skipped - do not care third party apk 2025-09-06 02:39:44.127 2030-2053 octvm_klo mcd I get wanted event[mask:128, name:[email protected]] from the watchset 2025-09-06 02:39:44.127 1095-12386 ContextImpl system_server W Calling a method in the system process without a qualified user: android.app.ContextImpl.bindService:1622 android.content.ContextWrapper.bindService:708 miui.os.DropBoxManager.ds:361 miui.os.DropBoxManager.a:350 miui.os.DropBoxManager.addText:314 2025-09-06 02:39:44.128 2030-2053 octvm_klo mcd I start gathering logcat log... 2025-09-06 02:39:44.129 1095-1106 ActivityManager system_server D report kill process: killerPid is:12323, killedPid is:12323 2025-09-06 02:39:44.129 12323-12323 Process person.tools.treasurebox I Sending signal. PID: 12323 SIG: 9 2025-09-06 02:39:44.140 2219-2555 JavaExceptionHandler com.miui.daemon W Too noisy! skip duplicate java exception report:person.tools.treasurebox now=1757097584140 mLastReportTime=1757097530813 interval=60000 2025-09-06 02:39:44.145 2030-2053 octvm_klo mcd I gathering logcat log done 2025-09-06 02:39:44.145 2030-2053 octvm_klo mcd I klo unlock 2025-09-06 02:39:44.149 1095-1190 InputDispatcher system_server W channel 'f025fa4 person.tools.treasurebox/person.tools.treasurebox.customview.view.CustomViewTestActivity (server)' ~ Consumer closed input channel or an error occurred. 
events=0x9 2025-09-06 02:39:44.149 1095-1190 InputDispatcher system_server E channel 'f025fa4 person.tools.treasurebox/person.tools.treasurebox.customview.view.CustomViewTestActivity (server)' ~ Channel is unrecoverably broken and will be disposed! 2025-09-06 02:39:44.150 1095-1190 InputDispatcher system_server W channel '6802547 person.tools.treasurebox/person.tools.treasurebox.dashboard.view.MainActivity (server)' ~ Consumer closed input channel or an error occurred. events=0x9 2025-09-06 02:39:44.150 1095-1190 InputDispatcher system_server E channel '6802547 person.tools.treasurebox/person.tools.treasurebox.dashboard.view.MainActivity (server)' ~ Channel is unrecoverably broken and will be disposed! 2025-09-06 02:39:44.150 11752-11752 ADB_SERVICES adbd I for fd 18, revents = 2011 2025-09-06 02:39:44.150 1095-1106 WindowManager system_server I WIN DEATH: Window{6802547 u0 person.tools.treasurebox/person.tools.treasurebox.dashboard.view.MainActivity} 2025-09-06 02:39:44.150 1095-1106 InputDispatcher system_server W Attempted to unregister already unregistered input channel '6802547 person.tools.treasurebox/person.tools.treasurebox.dashboard.view.MainActivity (server)' 2025-09-06 02:39:44.150 1095-10052 ActivityManager system_server I Process person.tools.treasurebox (pid 12323) has died: fore TOP 2025-09-06 02:39:44.150 1095-1114 libprocessgroup system_server W kill(-12323, 9) failed: No such process 2025-09-06 02:39:44.152 11752-11752 ADB_SERVICES adbd I for fd 18, revents = 2011 2025-09-06 02:39:44.152 440-473 [email protected] [email protected] I notifyAppState_2_1 pack:person.tools.treasurebox, act:person.tools.treasurebox, pid:12323, uid:10135, state:3 2025-09-06 02:39:44.155 1095-2166 WindowManager system_server I WIN DEATH: Window{f025fa4 u0 person.tools.treasurebox/person.tools.treasurebox.customview.view.CustomViewTestActivity} 2025-09-06 02:39:44.155 1095-2166 InputDispatcher system_server W Attempted to unregister already unregistered input channel 
'f025fa4 person.tools.treasurebox/person.tools.treasurebox.customview.view.CustomViewTestActivity (server)' 2025-09-06 02:39:44.156 1095-1114 libprocessgroup system_server W kill(-12323, 9) failed: No such process 2025-09-06 02:39:44.156 1095-1114 libprocessgroup system_server I Successfully killed process cgroup uid 10135 pid 12323 in 5ms 2025-09-06 02:39:44.163 1285-1285 EventBus com.android.systemui D [1285, u0] send(AppTransitionFinishedEvent) 2025-09-06 02:39:44.163 1285-1285 EventBus com.android.systemui D [1285, u0] -> ForcedResizableInfoActivityController [0x976a029, P1] onBusEvent(AppTransitionFinishedEvent) 2025-09-06 02:39:44.163 1285-1285 EventBus com.android.systemui D [1285, u0] onBusEvent(AppTransitionFinishedEvent) duration: 19 microseconds, avg: 876 2025-09-06 02:39:44.167 459-1186 SurfaceFlinger surfaceflinger W Attempting to set client state on removed layer: person.tools.treasurebox/person.tools.treasurebox.customview.view.CustomViewTestActivity#0 2025-09-06 02:39:44.167 459-1186 SurfaceFlinger surfaceflinger W Attempting to destroy on removed layer: person.tools.treasurebox/person.tools.treasurebox.customview.view.CustomViewTestActivity#0 2025-09-06 02:39:44.169 1285-1285 EventBus com.android.systemui D [1285, u0] send(AppTransitionFinishedEvent) 2025-09-06 02:39:44.169 1095-10052 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:44.169 1285-1285 EventBus com.android.systemui D [1285, u0] -> ForcedResizableInfoActivityController [0x976a029, P1] onBusEvent(AppTransitionFinishedEvent) 2025-09-06 02:39:44.169 1285-1285 EventBus com.android.systemui D [1285, u0] onBusEvent(AppTransitionFinishedEvent) duration: 19 microseconds, avg: 873 2025-09-06 02:39:44.169 1095-10052 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:44.171 1095-10052 chatty system_server I uid=1000(system) Binder:1095_20 identical 1 line 2025-09-06 02:39:44.171 1095-10052 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 
02:39:44.173 459-1186 SurfaceFlinger surfaceflinger I [SF client] Remove(0xaea97940) for (1095:system_server) 2025-09-06 02:39:44.177 1095-1170 ViewRootIm...easurebox] system_server D hardware acceleration = false , fakeHwAccelerated = true, sRendererDisabled = false, forceHwAccelerated = false, sSystemRendererDisabled = false 2025-09-06 02:39:44.179 454-454 APM_AudioPolicyManager audioserver D AudioPolicyManager:setRecordSilenced(uid:10135, silenced:1) 2025-09-06 02:39:44.179 454-454 AudioFlinger audioserver D AudioFlinger::setRecordSilenced(uid:10135, silenced:1) 2025-09-06 02:39:44.179 1095-1119 ActivityManager system_server W setHasOverlayUi called on unknown pid: 12323 2025-09-06 02:39:44.195 1095-1113 Boost system_server D hostingType=activity, hostingName=person.tools.treasurebox/.dashboard.view.MainActivity, callerPackage=null, isSystem=true, isBoostNeeded=false. 2025-09-06 02:39:44.197 1095-1113 ActivityManager system_server I Start proc 12389:person.tools.treasurebox/u0a135 for activity person.tools.treasurebox/.dashboard.view.MainActivity caller=null 2025-09-06 02:39:44.202 12389-12389 ols.treasurebo pid-12389 I Late-enabling -Xcheck:jni 2025-09-06 02:39:44.226 1095-1170 Surface system_server D lockCanvas 2025-09-06 02:39:44.226 1095-1170 Surface system_server D Surface::connect(this=0x8f30b000,api=2) 2025-09-06 02:39:44.252 11752-11752 ADB_SERVICES adbd I local_socket_flush_incoming write_data=2497352 2025-09-06 02:39:44.256 11752-11752 ADB_SERVICES adbd I service_to_fd shell:stat -c %u /proc/12389 | xargs -n 1 cmd package list packages --uid 2025-09-06 02:39:44.267 12389-12389 libc pid-12389 E Access denied finding property "persist.vendor.sys.activitylog" 2025-09-06 02:39:44.263 12389-12389 re-initialized> pid-12389 W type=1400 audit(0.0:19942): avc: denied { read } for name="u:object_r:mtk_amslog_prop:s0" dev="tmpfs" ino=9762 scontext=u:r:untrusted_app:s0:c135,c256,c512,c768 tcontext=u:object_r:mtk_amslog_prop:s0 tclass=file permissive=0 2025-09-06 
02:39:44.278 440-473 [email protected] [email protected] I notifyAppState_2_1 pack:person.tools.treasurebox, act:person.tools.treasurebox.dashboard.view.MainActivity, pid:12389, uid:10135, state:1 2025-09-06 02:39:44.280 1095-1170 Timeline system_server I Timeline: App_transition_ready time:8410416 2025-09-06 02:39:44.280 1095-1170 Timeline system_server I Timeline: App_transition_stopped time:8410416 2025-09-06 02:39:44.283 1285-1285 EventBus com.android.systemui D [1285, u0] send(AppTransitionFinishedEvent) 2025-09-06 02:39:44.283 1285-1285 EventBus com.android.systemui D [1285, u0] -> ForcedResizableInfoActivityController [0x976a029, P1] onBusEvent(AppTransitionFinishedEvent) 2025-09-06 02:39:44.283 1285-1285 EventBus com.android.systemui D [1285, u0] onBusEvent(AppTransitionFinishedEvent) duration: 30 microseconds, avg: 869 2025-09-06 02:39:44.283 454-8468 APM_AudioPolicyManager audioserver D AudioPolicyManager:setRecordSilenced(uid:10135, silenced:0) 2025-09-06 02:39:44.284 454-8468 AudioFlinger audioserver D AudioFlinger::setRecordSilenced(uid:10135, silenced:0) 2025-09-06 02:39:44.286 1285-1285 EventBus com.android.systemui D [1285, u0] send(AppTransitionFinishedEvent) 2025-09-06 02:39:44.286 1285-1285 EventBus com.android.systemui D [1285, u0] -> ForcedResizableInfoActivityController [0x976a029, P1] onBusEvent(AppTransitionFinishedEvent) ---------------------------- PROCESS STARTED (12389) for package person.tools.treasurebox ---------------------------- 2025-09-06 02:39:44.286 1285-1285 EventBus com.android.systemui D [1285, u0] onBusEvent(AppTransitionFinishedEvent) duration: 17 microseconds, avg: 865 2025-09-06 02:39:44.362 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:44.353 626-626 ata_acm [email protected] W type=1400 audit(0.0:19943): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 
tclass=capability permissive=0 2025-09-06 02:39:44.386 11752-11752 ADB_SERVICES adbd I for fd 29, revents = 10 2025-09-06 02:39:44.387 11752-12403 ADB_SERVICES adbd I post waitpid (pid=12402) status=0000 2025-09-06 02:39:44.531 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden field Landroid/os/Trace;->TRACE_TAG_APP:J (light greylist, reflection) 2025-09-06 02:39:44.531 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/os/Trace;->isTagEnabled(J)Z (light greylist, reflection) 2025-09-06 02:39:44.585 12389-12389 AppCompatDelegate person.tools.treasurebox D Checking for metadata for AppLocalesMetadataHolderService : Service not found 2025-09-06 02:39:44.643 12389-12389 Binder:intercep person.tools.treasurebox W type=1400 audit(0.0:19944): avc: denied { getattr } for path="/data/data/com.miui.contentcatcher" dev="dm-2" ino=3088579 scontext=u:r:untrusted_app:s0:c135,c256,c512,c768 tcontext=u:object_r:system_app_data_file:s0 tclass=dir permissive=0 2025-09-06 02:39:44.657 12389-12414 ViewContentFactory person.tools.treasurebox D initViewContentFetcherClass 2025-09-06 02:39:44.657 12389-12414 ViewContentFactory person.tools.treasurebox D getInterceptorPackageInfo 2025-09-06 02:39:44.657 12389-12414 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/app/AppGlobals;->getInitialApplication()Landroid/app/Application; (light greylist, linking) 2025-09-06 02:39:44.658 12389-12414 ViewContentFactory person.tools.treasurebox D getInitialApplication took 2ms 2025-09-06 02:39:44.659 12389-12414 ViewContentFactory person.tools.treasurebox D packageInfo.packageName: com.miui.catcherpatch 2025-09-06 02:39:44.663 12389-12389 Binder:intercep person.tools.treasurebox W type=1400 audit(0.0:19945): avc: denied { getattr } for path="/data/data/com.miui.catcherpatch" dev="dm-2" ino=3114656 scontext=u:r:untrusted_app:s0:c135,c256,c512,c768 tcontext=u:object_r:system_app_data_file:s0 tclass=dir permissive=0 
2025-09-06 02:39:44.677 12389-12414 ViewContentFactory person.tools.treasurebox D initViewContentFetcherClass took 21ms 2025-09-06 02:39:44.678 12389-12414 ContentCatcher person.tools.treasurebox I ViewContentFetcher : ViewContentFetcher 2025-09-06 02:39:44.678 12389-12414 ViewContentFactory person.tools.treasurebox D createInterceptor took 22ms 2025-09-06 02:39:44.697 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/view/View;->computeFitSystemWindows(Landroid/graphics/Rect;Landroid/graphics/Rect;)Z (light greylist, reflection) 2025-09-06 02:39:44.731 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/graphics/FontFamily;-><init>()V (light greylist, reflection) 2025-09-06 02:39:44.731 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/graphics/FontFamily;->addFontFromAssetManager(Landroid/content/res/AssetManager;Ljava/lang/String;IZIII[Landroid/graphics/fonts/FontVariationAxis;)Z (light greylist, reflection) 2025-09-06 02:39:44.732 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/graphics/FontFamily;->addFontFromBuffer(Ljava/nio/ByteBuffer;I[Landroid/graphics/fonts/FontVariationAxis;II)Z (light greylist, reflection) 2025-09-06 02:39:44.732 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/graphics/FontFamily;->freeze()Z (light greylist, reflection) 2025-09-06 02:39:44.732 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/graphics/FontFamily;->abortCreation()V (light greylist, reflection) 2025-09-06 02:39:44.732 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/graphics/Typeface;->createFromFamiliesWithDefault([Landroid/graphics/FontFamily;Ljava/lang/String;II)Landroid/graphics/Typeface; (light greylist, reflection) 2025-09-06 02:39:44.798 12389-12389 BoostFramework person.tools.treasurebox E BoostFramework() 
: Exception_1 = java.lang.ClassNotFoundException: com.qualcomm.qti.Performance 2025-09-06 02:39:44.798 12389-12389 BoostFramework person.tools.treasurebox E BoostFramework() Ux Perf: Exception = java.lang.ClassNotFoundException: com.qualcomm.qti.UxPerformance 2025-09-06 02:39:44.799 12389-12389 BoostFramework person.tools.treasurebox E BoostFramework() : Exception_1 = java.lang.ClassNotFoundException: com.qualcomm.qti.Performance 2025-09-06 02:39:44.799 12389-12389 BoostFramework person.tools.treasurebox E BoostFramework() Ux Perf: Exception = java.lang.ClassNotFoundException: com.qualcomm.qti.UxPerformance 2025-09-06 02:39:44.849 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/view/ViewGroup;->makeOptionalFitsSystemWindows()V (light greylist, reflection) 2025-09-06 02:39:44.863 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:44.926 12389-12389 SurfaceFactory person.tools.treasurebox I [static] sSurfaceFactory = com.mediatek.view.impl.SurfaceFactoryImpl@2b38fec 2025-09-06 02:39:44.941 12389-12389 ViewRootIm...nActivity] person.tools.treasurebox D hardware acceleration = true , fakeHwAccelerated = false, sRendererDisabled = false, forceHwAccelerated = false, sSystemRendererDisabled = false 2025-09-06 02:39:44.944 459-1186 SurfaceFlinger surfaceflinger I [SF client] NEW(0xac6fe700) for (1095:system_server) 2025-09-06 02:39:44.949 12389-12389 PhoneWindow person.tools.treasurebox V DecorView setVisiblity: visibility = 0, Parent = android.view.ViewRootImpl@3e639bb, this = DecorView@baaaede[MainActivity] 2025-09-06 02:39:44.950 1095-1196 UiModeManager system_server V switch night mode to 1 2025-09-06 02:39:44.975 459-459 SurfaceFlinger surfaceflinger I [Built-in Screen (type:0)] fps:11.821832,dur:1015.07,max:665.79,min:14.89 2025-09-06 02:39:45.006 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method 
Landroid/os/Trace;->asyncTraceBegin(JLjava/lang/String;I)V (light greylist, reflection) 2025-09-06 02:39:45.006 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/os/Trace;->asyncTraceEnd(JLjava/lang/String;I)V (light greylist, reflection) 2025-09-06 02:39:45.007 12389-12389 ols.treasurebo person.tools.treasurebox W Accessing hidden method Landroid/os/Trace;->traceCounter(JLjava/lang/String;I)V (light greylist, reflection) 2025-09-06 02:39:45.131 1095-2166 WindowManager system_server I Relayout Window{4359870 u0 person.tools.treasurebox/person.tools.treasurebox.dashboard.view.MainActivity}: oldVis=4 newVis=0 focusMayChange = true 2025-09-06 02:39:45.150 12389-12389 Surface person.tools.treasurebox D Surface::allocateBuffers(this=0xa2b88000) 2025-09-06 02:39:45.159 12389-12415 ConfigStore person.tools.treasurebox I android::hardware::configstore::V1_0::ISurfaceFlingerConfigs::hasWideColorDisplay retrieved: 0 2025-09-06 02:39:45.159 12389-12415 ConfigStore person.tools.treasurebox I android::hardware::configstore::V1_0::ISurfaceFlingerConfigs::hasHDRDisplay retrieved: 0 2025-09-06 02:39:45.159 12389-12415 OpenGLRenderer person.tools.treasurebox I Initialized EGL, version 1.4 2025-09-06 02:39:45.159 12389-12415 OpenGLRenderer person.tools.treasurebox D Swap behavior 2 2025-09-06 02:39:45.178 12389-12415 Surface person.tools.treasurebox D Surface::connect(this=0xa2b88000,api=1) 2025-09-06 02:39:45.181 12389-12415 libEGL person.tools.treasurebox I [MTK Game SDK] low_latency_mode(0) pid(-1) property(-1) 2025-09-06 02:39:45.195 12389-12389 Looper person.tools.treasurebox W Slow Looper main: doFrame is 395ms late because of 3 msg, msg 1 took 401ms (late=60ms h=android.app.ActivityThread$H w=159) 2025-09-06 02:39:45.270 12389-12415 ion person.tools.treasurebox E ioctl c0044901 failed with code -1: Invalid argument 2025-09-06 02:39:45.316 1095-1170 View system_server D [Warning] assignParent to null: this = DecorView@26afc6e[treasurebox] 
2025-09-06 02:39:45.316 1095-1119 ActivityManager system_server I Displayed person.tools.treasurebox/.dashboard.view.MainActivity: +1s142ms (total +1s303ms) 2025-09-06 02:39:45.317 1095-1119 Timeline system_server I Timeline: Activity_windows_visible id: ActivityRecord{6668306 u0 person.tools.treasurebox/.dashboard.view.MainActivity t51} time:8411452 2025-09-06 02:39:45.326 1095-1170 Surface system_server D Surface::disconnect(this=0x8f30b000,api=2) 2025-09-06 02:39:45.328 459-495 SurfaceFlinger surfaceflinger W Attempting to set client state on removed layer: Splash Screen person.tools.treasurebox#0 2025-09-06 02:39:45.328 459-495 SurfaceFlinger surfaceflinger W Attempting to destroy on removed layer: Splash Screen person.tools.treasurebox#0 2025-09-06 02:39:45.353 626-626 ata_acm [email protected] W type=1400 audit(0.0:19947): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:45.363 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:45.383 440-473 [email protected] [email protected] I powerHintAsync hint:8, data:0 2025-09-06 02:39:45.386 1095-2166 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:45.386 1095-2166 PowerWrap system_server I PowerHal_Wrap_querySysInfo 2025-09-06 02:39:45.387 1095-2166 PowerHalWrapper system_server E <amsBoostStop> duration: 6000ms 2025-09-06 02:39:45.387 1095-2166 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:45.387 1095-2166 PowerWrap system_server I PowerHal_Wrap_mtkPowerHint 2025-09-06 02:39:45.864 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:45.853 626-626 ata_acm [email protected] W type=1400 audit(0.0:19948): avc: denied { dac_override } for capability=1 
scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:46.065 454-8468 APM_AudioPolicyManager audioserver D AudioPolicyManager:setRecordSilenced(uid:10042, silenced:1) 2025-09-06 02:39:46.065 454-8468 AudioFlinger audioserver D AudioFlinger::setRecordSilenced(uid:10042, silenced:1) 2025-09-06 02:39:46.362 1095-1231 BatteryService system_server D /data/anr/adb_enable file.exists() = false mPlugType==2 2025-09-06 02:39:46.364 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:46.363 626-626 ata_acm [email protected] W type=1400 audit(0.0:19949): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:46.365 1285-1285 KeyguardUpdateMonitor com.android.systemui D received broadcast android.intent.action.BATTERY_CHANGED 2025-09-06 02:39:46.365 2487-2487 BatteryInfoReceiver com.miui.securitycenter.remote I ACTION_BATTERY_CHANGED 2025-09-06 02:39:46.366 5434-5518 PowerCheckerService com.miui.powerkeeper D onBatteryChanged, mBatteryLevel = 100, status = 5, level = 100, plug = 2, scale = 100 2025-09-06 02:39:46.368 463-463 MTK_FG fuelgauged W fd < 0, init first! 2025-09-06 02:39:46.368 463-463 MTK_FG fuelgauged E init failed, return! 2025-09-06 02:39:46.368 463-463 MTK_FG fuelgauged W fd < 0, init first! 2025-09-06 02:39:46.368 463-463 MTK_FG fuelgauged E init failed, return! 
2025-09-06 02:39:46.384 1095-1231 BatteryService system_server D /data/anr/adb_enable file.exists() = false mPlugType==2 2025-09-06 02:39:46.865 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:46.863 626-626 ata_acm [email protected] W type=1400 audit(0.0:19950): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:47.365 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:47.363 626-626 ata_acm [email protected] W type=1400 audit(0.0:19951): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:47.866 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:47.863 626-626 ata_acm [email protected] W type=1400 audit(0.0:19952): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:48.366 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:48.363 626-626 ata_acm [email protected] W type=1400 audit(0.0:19953): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:48.867 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:48.863 626-626 ata_acm [email protected] W type=1400 audit(0.0:19954): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 
tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:49.367 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:49.363 626-626 ata_acm [email protected] W type=1400 audit(0.0:19955): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:49.369 11752-11752 ADB_SERVICES adbd I local_socket_flush_outgoing read_data=48483 2025-09-06 02:39:49.868 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:49.863 626-626 ata_acm [email protected] W type=1400 audit(0.0:19956): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:50.368 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:50.363 626-626 ata_acm [email protected] W type=1400 audit(0.0:19957): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:50.426 607-607 thermal_repeater thermal I [recvMdThermalInfo] ret=30, strLen=127, 3, 39, -127, 0, 32767, -28377 2025-09-06 02:39:50.690 12389-12420 ProfileInstaller person.tools.treasurebox D Installing profile for person.tools.treasurebox 2025-09-06 02:39:50.869 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:50.863 626-626 ata_acm [email protected] W type=1400 audit(0.0:19958): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:51.277 
607-634 thermal_repeater thermal I inotify_add_watch error! 2025-09-06 02:39:51.277 607-634 thermal_repeater thermal I Error 2: No such file or directory 2025-09-06 02:39:51.369 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:51.363 626-626 ata_acm [email protected] W type=1400 audit(0.0:19959): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:51.394 440-472 powerd [email protected] I [TIMER] POWER_MSG_MTK_HINT_EXT_LAUNCH ENABLE EXPIRE 2025-09-06 02:39:51.395 440-472 libPowerHal [email protected] I 15: cpu_ctrl set freq: -1 -1 -1 -1 2025-09-06 02:39:51.518 599-719 storaged storaged E getDiskStats failed with result NOT_SUPPORTED and size 0 2025-09-06 02:39:51.870 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:51.863 626-626 ata_acm [email protected] W type=1400 audit(0.0:19960): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:52.371 626-712 factoryInterface_common [email protected] E ERROR: factoryInterface,serial_config.cpp,265,open_usb(): Open /dev/ttyGS0 fail 2025-09-06 02:39:52.363 626-626 ata_acm [email protected] W type=1400 audit(0.0:19961): avc: denied { dac_override } for capability=1 scontext=u:r:factory_services:s0 tcontext=u:r:factory_services:s0 tclass=capability permissive=0 2025-09-06 02:39:52.526 12389-12412 ols.treasurebo person.tools.treasurebox I ProcessProfilingInfo new_methods=85 is saved saved_to_disk=1 resolve_classes_delay=8000

Crash cause analysis: the fatal error is the NullPointerException in LineChartMarkerActivity.setupLineChart (LineChartMarkerActivity.java:44), where LineChart.getDescription() is invoked on a null object reference. The LineChart field was never assigned, most likely because findViewById returned null (a wrong view ID, or the chart view is missing from the layout passed to setContentView in onCreate). ActivityManager then force-finishes the activity, the process is killed (PID 12323, SIG 9), and MainActivity is relaunched as PID 12389; everything after that point in the log is normal restart noise.


from data import *
from utils.augmentations import SSDAugmentation, BaseTransform
from utils.functions import MovingAverage, SavePath
from utils.logger import Log
from utils import timer
from layers.modules import MultiBoxLoss
from yolact import Yolact
from thop import profile

import os
import sys
import time
import math, random
from pathlib import Path
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim
import torch.backends.cudnn as cudnn
import torch.nn.init as init
import torch.utils.data as data
import numpy as np
import argparse
import datetime

# Oof
import eval as eval_script

def str2bool(v):
    return v.lower() in ("yes", "true", "t", "1")

parser = argparse.ArgumentParser(
    description='Yolact Training Script')
parser.add_argument('--batch_size', default=2, type=int,
                    help='Batch size for training')
parser.add_argument('--resume', default=None, type=str,
                    help='Checkpoint state_dict file to resume training from. If this is "interrupt"'
                         ', the model will resume training from the interrupt file.')
parser.add_argument('--start_iter', default=-1, type=int,
                    help='Resume training at this iter. If this is -1, the iteration will be '
                         'determined from the file name.')
parser.add_argument('--num_workers', default=0, type=int,
                    help='Number of workers used in dataloading')
parser.add_argument('--cuda', default=True, type=str2bool,
                    help='Use CUDA to train model')
parser.add_argument('--lr', '--learning_rate', default=None, type=float,
                    help='Initial learning rate. Leave as None to read this from the config.')
parser.add_argument('--momentum', default=None, type=float,
                    help='Momentum for SGD. Leave as None to read this from the config.')
parser.add_argument('--decay', '--weight_decay', default=None, type=float,
                    help='Weight decay for SGD. Leave as None to read this from the config.')
parser.add_argument('--gamma', default=None, type=float,
                    help='For each lr step, what to multiply the lr by. Leave as None to read this from the config.')
parser.add_argument('--save_folder', default='weights/',
                    help='Directory for saving checkpoint models.')
parser.add_argument('--log_folder', default='logs/',
                    help='Directory for saving logs.')
parser.add_argument('--config', default=None,
                    help='The config object to use.')
parser.add_argument('--save_interval', default=10000, type=int,
                    help='The number of iterations between saving the model.')
parser.add_argument('--validation_size', default=5000, type=int,
                    help='The number of images to use for validation.')
parser.add_argument('--validation_epoch', default=2, type=int,
                    help='Output validation information every n iterations. If -1, do no validation.')
parser.add_argument('--keep_latest', dest='keep_latest', action='store_true',
                    help='Only keep the latest checkpoint instead of each one.')
parser.add_argument('--keep_latest_interval', default=100000, type=int,
                    help='When --keep_latest is on, don\'t delete the latest file at these intervals. This should be a multiple of save_interval or 0.')
parser.add_argument('--dataset', default=None, type=str,
                    help='If specified, override the dataset specified in the config with this one (example: coco2017_dataset).')
parser.add_argument('--no_log', dest='log', action='store_false',
                    help='Don\'t log per iteration information into log_folder.')
parser.add_argument('--log_gpu', dest='log_gpu', action='store_true',
                    help='Include GPU information in the logs. Nvidia-smi tends to be slow, so set this with caution.')
parser.add_argument('--no_interrupt', dest='interrupt', action='store_false',
                    help='Don\'t save an interrupt when KeyboardInterrupt is caught.')
parser.add_argument('--batch_alloc', default=None, type=str,
                    help='If using multiple GPUS, you can set this to be a comma separated list detailing which GPUs should get what local batch size (It should add up to your total batch size).')
parser.add_argument('--no_autoscale', dest='autoscale', action='store_false',
                    help='YOLACT will automatically scale the lr and the number of iterations depending on the batch size. Set this if you want to disable that.')
parser.set_defaults(keep_latest=False, log=True, log_gpu=False, interrupt=True, autoscale=True)

args = parser.parse_args()

if args.config is not None:
    set_cfg(args.config)

if args.dataset is not None:
    set_dataset(args.dataset)

if args.autoscale and args.batch_size != 8:
    factor = args.batch_size / 8
    if __name__ == '__main__':
        print('Scaling parameters by %.2f to account for a batch size of %d.' % (factor, args.batch_size))

    cfg.lr *= factor
    cfg.max_iter //= factor
    cfg.lr_steps = [x // factor for x in cfg.lr_steps]

# Update training parameters from the config if necessary
def replace(name):
    if getattr(args, name) is None:
        setattr(args, name, getattr(cfg, name))
replace('lr')
replace('decay')
replace('gamma')
replace('momentum')

# This is managed by set_lr
cur_lr = args.lr

if torch.cuda.device_count() == 0:
    print('No GPUs detected. Exiting...')
    exit(-1)

if args.batch_size // torch.cuda.device_count() < 6:
    if __name__ == '__main__':
        print('Per-GPU batch size is less than the recommended limit for batch norm. Disabling batch norm.')
    cfg.freeze_bn = True

loss_types = ['B', 'C', 'M', 'P', 'D', 'E', 'S', 'I']

if torch.cuda.is_available():
    if args.cuda:
        torch.set_default_tensor_type('torch.cuda.FloatTensor')
    if not args.cuda:
        print("WARNING: It looks like you have a CUDA device, but aren't " +
              "using CUDA.\nRun with --cuda for optimal training speed.")
        torch.set_default_tensor_type('torch.FloatTensor')
else:
    torch.set_default_tensor_type('torch.FloatTensor')

class NetLoss(nn.Module):
    """
    A wrapper for running the network and computing the loss.
    This is so we can more efficiently use DataParallel.
    """

    def __init__(self, net: Yolact, criterion: MultiBoxLoss):
        super().__init__()

        self.net = net
        self.criterion = criterion

    def forward(self, images, targets, masks, num_crowds):
        preds = self.net(images)
        losses = self.criterion(self.net, preds, targets, masks, num_crowds)
        return losses

class CustomDataParallel(nn.DataParallel):
    """
    This is a custom version of DataParallel that works better with our training data.
    It should also be faster than the general case.
    """

    def scatter(self, inputs, kwargs, device_ids):
        # More like scatter and data prep at the same time. The point is we prep the data in such a way
        # that no scatter is necessary, and there's no need to shuffle stuff around different GPUs.
        devices = ['cuda:' + str(x) for x in device_ids]
        splits = prepare_data(inputs[0], devices, allocation=args.batch_alloc)

        return [[split[device_idx] for split in splits] for device_idx in range(len(devices))], \
               [kwargs] * len(devices)

    def gather(self, outputs, output_device):
        out = {}

        for k in outputs[0]:
            out[k] = torch.stack([output[k].to(output_device) for output in outputs])

        return out

def train():
    if not os.path.exists(args.save_folder):
        os.mkdir(args.save_folder)

    dataset = COCODetection(image_path=cfg.dataset.train_images,
                            info_file=cfg.dataset.train_info,
                            transform=SSDAugmentation(MEANS))

    if args.validation_epoch > 0:
        setup_eval()
        val_dataset = COCODetection(image_path=cfg.dataset.valid_images,
                                    info_file=cfg.dataset.valid_info,
                                    transform=BaseTransform(MEANS))

    # Parallel wraps the underlying module, but when saving and loading we don't want that
    yolact_net = Yolact()
    net = yolact_net
    net.train()

    # Parameter count
    def count_parameters(model):
        return sum(p.numel() for p in model.parameters() if p.requires_grad)

    total_params = count_parameters(yolact_net)
    print(f"Model Parameters: {total_params / 1e6:.3f}M")  # in millions

    # FLOPs estimate via thop
    input_size = (1, 3, cfg.max_size, cfg.max_size)
    dummy_input = torch.zeros(*input_size).to("cuda" if args.cuda else "cpu")
    flops, _ = profile(yolact_net, inputs=(dummy_input,), verbose=False)
    print(f"GFLOPs: {flops / 1e9:.2f}G")

    if args.log:
        log = Log(cfg.name, args.log_folder, dict(args._get_kwargs()),
                  overwrite=(args.resume is None), log_gpu_stats=args.log_gpu)

    # I don't use the timer during training (I use a different timing method).
    # Apparently there's a race condition with multiple GPUs, so disable it just to be safe.
    timer.disable_all()

    # Both of these can set args.resume to None, so do them before the check
    if args.resume == 'interrupt':
        args.resume = SavePath.get_interrupt(args.save_folder)
    elif args.resume == 'latest':
        args.resume = SavePath.get_latest(args.save_folder, cfg.name)

    if args.resume is not None:
        print('Resuming training, loading {}...'.format(args.resume))
        yolact_net.load_weights(args.resume)

        if args.start_iter == -1:
            args.start_iter = SavePath.from_str(args.resume).iteration
    else:
        print('Initializing weights...')
        yolact_net.init_weights(backbone_path=args.save_folder + cfg.backbone.path)

    optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=args.momentum,
                          weight_decay=args.decay)
    criterion = MultiBoxLoss(num_classes=cfg.num_classes,
                             pos_threshold=cfg.positive_iou_threshold,
                             neg_threshold=cfg.negative_iou_threshold,
                             negpos_ratio=cfg.ohem_negpos_ratio)

    if args.batch_alloc is not None:
        args.batch_alloc = [int(x) for x in args.batch_alloc.split(',')]
        if sum(args.batch_alloc) != args.batch_size:
            print('Error: Batch allocation (%s) does not sum to batch size (%s).' % (args.batch_alloc, args.batch_size))
            exit(-1)

    net = CustomDataParallel(NetLoss(net, criterion))
    if args.cuda:
        net = net.cuda()

    # Initialize everything
    if not cfg.freeze_bn:
        yolact_net.freeze_bn()  # Freeze bn so we don't kill our means
    yolact_net(torch.zeros(1, 3, cfg.max_size, cfg.max_size).cuda())
    if not cfg.freeze_bn:
        yolact_net.freeze_bn(True)

    # loss counters
    loc_loss = 0
    conf_loss = 0
    iteration = max(args.start_iter, 0)
    last_time = time.time()

    epoch_size = (len(dataset) + 1) // args.batch_size
    num_epochs = math.ceil(cfg.max_iter / epoch_size)

    # Which learning rate adjustment step are we on? lr' = lr * gamma ^ step_index
    step_index = 0

    data_loader = data.DataLoader(dataset, args.batch_size,
                                  num_workers=args.num_workers,
                                  shuffle=True, collate_fn=detection_collate,
                                  pin_memory=True)

    save_path = lambda epoch, iteration: SavePath(cfg.name, epoch, iteration).get_path(root=args.save_folder)
    time_avg = MovingAverage()

    global loss_types  # Forms the print order
    loss_avgs = {k: MovingAverage(100) for k in loss_types}

    print('Begin training!')
    print()
    # try-except so you can use ctrl+c to save early and stop training
    try:
        for epoch in range(num_epochs):
            # Resume from start_iter
            if (epoch + 1) * epoch_size < iteration:
                continue

            for datum in data_loader:
                # Stop if we've reached an epoch if we're resuming from start_iter
                if iteration == (epoch + 1) * epoch_size:
                    break

                # Stop at the configured number of iterations even if mid-epoch
                if iteration == cfg.max_iter:
                    break

                # Change a config setting if we've reached the specified iteration
                changed = False
                for change in cfg.delayed_settings:
                    if iteration >= change[0]:
                        changed = True
                        cfg.replace(change[1])

                        # Reset the loss averages because things might have changed
                        for avg in loss_avgs.values():
                            avg.reset()

                # If a config setting was changed, remove it from the list so we don't keep checking
                if changed:
                    cfg.delayed_settings = [x for x in cfg.delayed_settings if x[0] > iteration]

                # Warm up by linearly interpolating the learning rate from some smaller value
                if cfg.lr_warmup_until > 0 and iteration <= cfg.lr_warmup_until:
                    set_lr(optimizer, (args.lr - cfg.lr_warmup_init) * (iteration / cfg.lr_warmup_until) + cfg.lr_warmup_init)

                # Adjust the learning rate at the given iterations, but also if we resume from past that iteration
                while step_index < len(cfg.lr_steps) and iteration >= cfg.lr_steps[step_index]:
                    step_index += 1
                    set_lr(optimizer, args.lr * (args.gamma ** step_index))

                # Zero the grad to get ready to compute gradients
                optimizer.zero_grad()

                # Forward Pass + Compute loss at the same time (see CustomDataParallel and NetLoss)
                losses = net(datum)
losses = { k: (v).mean() for k,v in losses.items() } # Mean here because Dataparallel loss = sum([losses[k] for k in losses]) # no_inf_mean removes some components from the loss, so make sure to backward through all of it # all_loss = sum([v.mean() for v in losses.values()]) # Backprop loss.backward() # Do this to free up vram even if loss is not finite if torch.isfinite(loss).item(): optimizer.step() # Add the loss to the moving average for bookkeeping for k in losses: loss_avgs[k].add(losses[k].item()) cur_time = time.time() elapsed = cur_time - last_time last_time = cur_time # Exclude graph setup from the timing information if iteration != args.start_iter: time_avg.add(elapsed) if iteration % 10 == 0: eta_str = str(datetime.timedelta(seconds=(cfg.max_iter-iteration) * time_avg.get_avg())).split('.')[0] total = sum([loss_avgs[k].get_avg() for k in losses]) loss_labels = sum([[k, loss_avgs[k].get_avg()] for k in loss_types if k in losses], []) print(('[%3d] %7d ||' + (' %s: %.3f |' * len(losses)) + ' T: %.3f || ETA: %s || timer: %.3f') % tuple([epoch, iteration] + loss_labels + [total, eta_str, elapsed]), flush=True) if args.log: precision = 5 loss_info = {k: round(losses[k].item(), precision) for k in losses} loss_info['T'] = round(loss.item(), precision) if args.log_gpu: log.log_gpu_stats = (iteration % 10 == 0) # nvidia-smi is sloooow log.log('train', loss=loss_info, epoch=epoch, iter=iteration, lr=round(cur_lr, 10), elapsed=elapsed) log.log_gpu_stats = args.log_gpu iteration += 1 if iteration % args.save_interval == 0 and iteration != args.start_iter: if args.keep_latest: latest = SavePath.get_latest(args.save_folder, cfg.name) print('Saving state, iter:', iteration) yolact_net.save_weights(save_path(epoch, iteration)) if args.keep_latest and latest is not None: if args.keep_latest_interval <= 0 or iteration % args.keep_latest_interval != args.save_interval: print('Deleting old save...') os.remove(latest) # This is done per epoch if args.validation_epoch > 0: 
if epoch % args.validation_epoch == 0 and epoch > 0: compute_validation_map(epoch, iteration, yolact_net, val_dataset, log if args.log else None) # Compute validation mAP after training is finished compute_validation_map(epoch, iteration, yolact_net, val_dataset, log if args.log else None) except KeyboardInterrupt: if args.interrupt: print('Stopping early. Saving network...') # Delete previous copy of the interrupted network so we don't spam the weights folder SavePath.remove_interrupt(args.save_folder) yolact_net.save_weights(save_path(epoch, repr(iteration) + '_interrupt')) exit() yolact_net.save_weights(save_path(epoch, iteration)) def set_lr(optimizer, new_lr): for param_group in optimizer.param_groups: param_group['lr'] = new_lr global cur_lr cur_lr = new_lr def gradinator(x): x.requires_grad = False return x def prepare_data(datum, devices:list=None, allocation:list=None): with torch.no_grad(): if devices is None: devices = ['cuda:0'] if args.cuda else ['cpu'] if allocation is None: allocation = [args.batch_size // len(devices)] * (len(devices) - 1) allocation.append(args.batch_size - sum(allocation)) # The rest might need more/less images, (targets, masks, num_crowds) = datum cur_idx = 0 for device, alloc in zip(devices, allocation): for _ in range(alloc): images[cur_idx] = gradinator(images[cur_idx].to(device)) targets[cur_idx] = gradinator(targets[cur_idx].to(device)) masks[cur_idx] = gradinator(masks[cur_idx].to(device)) cur_idx += 1 if cfg.preserve_aspect_ratio: # Choose a random size from the batch _, h, w = images[random.randint(0, len(images)-1)].size() for idx, (image, target, mask, num_crowd) in enumerate(zip(images, targets, masks, num_crowds)): images[idx], targets[idx], masks[idx], num_crowds[idx] \ = enforce_size(image, target, mask, num_crowd, w, h) cur_idx = 0 split_images, split_targets, split_masks, split_numcrowds \ = [[None for alloc in allocation] for _ in range(4)] for device_idx, alloc in enumerate(allocation): split_images[device_idx] 
= torch.stack(images[cur_idx:cur_idx+alloc], dim=0) split_targets[device_idx] = targets[cur_idx:cur_idx+alloc] split_masks[device_idx] = masks[cur_idx:cur_idx+alloc] split_numcrowds[device_idx] = num_crowds[cur_idx:cur_idx+alloc] cur_idx += alloc return split_images, split_targets, split_masks, split_numcrowds def no_inf_mean(x:torch.Tensor): """ Computes the mean of a vector, throwing out all inf values. If there are no non-inf values, this will return inf (i.e., just the normal mean). """ no_inf = [a for a in x if torch.isfinite(a)] if len(no_inf) > 0: return sum(no_inf) / len(no_inf) else: return x.mean() def compute_validation_loss(net, data_loader, criterion): global loss_types with torch.no_grad(): losses = {} # Don't switch to eval mode because we want to get losses iterations = 0 for datum in data_loader: images, targets, masks, num_crowds = prepare_data(datum) out = net(images) wrapper = ScatterWrapper(targets, masks, num_crowds) _losses = criterion(out, wrapper, wrapper.make_mask()) for k, v in _losses.items(): v = v.mean().item() if k in losses: losses[k] += v else: losses[k] = v iterations += 1 if args.validation_size <= iterations * args.batch_size: break for k in losses: losses[k] /= iterations loss_labels = sum([[k, losses[k]] for k in loss_types if k in losses], []) print(('Validation ||' + (' %s: %.3f |' * len(losses)) + ')') % tuple(loss_labels), flush=True) # 修改 compute_validation_map 函数 def compute_validation_map(epoch, iteration, yolact_net, dataset, log: Log = None): with torch.no_grad(): yolact_net.eval() # 添加 FPS 计算 num_test_frames = 100 total_time = 0 # 预热 GPU for _ in range(10): _ = yolact_net(torch.zeros(1, 3, cfg.max_size, cfg.max_size).cuda()) # 正式测试 for i in range(num_test_frames): img, _ = dataset[i] img = img.unsqueeze(0).cuda() start_time = time.perf_counter() preds = yolact_net(img) torch.cuda.synchronize() # 确保 CUDA 操作完成 total_time += time.perf_counter() - start_time fps = num_test_frames / total_time print(f"FPS: {fps:.2f}") # 
原有验证代码 print("\nComputing validation mAP...") val_info = eval_script.evaluate(yolact_net, dataset, train_mode=True) # 记录 FPS if log is not None: log.log('val', {'fps': fps}, epoch=epoch, iter=iteration) yolact_net.train() return fps # 在 compute_validation_map 函数中 print(f"\nValidation Metrics @ iter {iteration}:") print(f"├── Params: {total_params / 1e6:.3f}M") print(f"├── GFLOPs: {flops / 1e9:.2f}G") print(f"├── FPS: {fps:.2f}") print(f"└── mIoU: {val_info.get('mIoU', 0):.4f}") # 记录所有指标 if log is not None: metrics = { 'params': total_params, 'gflops': flops / 1e9, 'fps': fps, 'mIoU': val_info.get('mIoU', 0) } log.log('metrics', metrics, epoch=epoch, iter=iteration) def setup_eval(): eval_script.parse_args(['--no_bar', '--max_images='+str(args.validation_size)]) if __name__ == '__main__': train() Traceback (most recent call last): File "train.py", line 558, in <module> train() File "train.py", line 202, in train flops, _ = profile(yolact_net, inputs=(dummy_input,), verbose=False) File "D:\Anaconda\envs\yolact\lib\site-packages\thop\profile.py", line 209, in profile model.apply(add_hooks) File "D:\Anaconda\envs\yolact\lib\site-packages\torch\nn\modules\module.py", line 473, in apply module.apply(fn) File "D:\Anaconda\envs\yolact\lib\site-packages\torch\nn\modules\module.py", line 473, in apply module.apply(fn) File "D:\Anaconda\envs\yolact\lib\site-packages\torch\nn\modules\module.py", line 473, in apply module.apply(fn) File "D:\Anaconda\envs\yolact\lib\site-packages\torch\nn\modules\module.py", line 474, in apply fn(self) File "D:\Anaconda\envs\yolact\lib\site-packages\thop\profile.py", line 174, in add_hooks m.register_buffer("total_ops", torch.zeros(1, dtype=torch.float64)) File "D:\Anaconda\envs\yolact\lib\site-packages\torch\nn\modules\module.py", line 316, in register_buffer self._buffers[name] = tensor File "D:\Anaconda\envs\yolact\lib\site-packages\torch\jit\_script.py", line 109, in __setitem__ " Tried to add '{}".format(k) RuntimeError: Can't add a new 
parameter after ScriptModule construction. Tried to add 'total_ops
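The traceback shows thop failing inside `register_buffer`: thop instruments a model by attaching new `total_ops`/`total_params` buffers to every submodule, but TorchScript modules are frozen after construction and reject new attributes. A minimal reproduction of that failure mode with plain PyTorch (no YOLACT or thop required; the buffer name mirrors thop's):

```python
import torch
import torch.nn as nn

# On a regular nn.Module, adding a new buffer after construction is fine
plain = nn.Linear(4, 4)
plain.register_buffer("total_ops", torch.zeros(1, dtype=torch.float64))

# On a TorchScript-compiled module, the same call raises the error from the traceback
scripted = torch.jit.script(nn.Linear(4, 4))
try:
    scripted.register_buffer("total_ops", torch.zeros(1, dtype=torch.float64))
except RuntimeError as e:
    print("ScriptModule rejected the new buffer:", e)
```

So FLOPs counting with thop has to run against a model that has not yet been (and does not internally contain anything) compiled with TorchScript.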


import torch
from thop import profile

# 1. Create the original (uncompiled) model
model = YourModel().eval()  # make sure it is in eval mode

# 2. Compute FLOPs (thop can safely add total_ops at this point)
input_tensor = torch.randn(1, 3, 224, 224)
flops, params = profile(model, inputs=(input_tensor,))

# 3. Print the results, then compile the model
print(f"FLOPs: {flops/1e9:.2f} G, Params: {params/1e6:.2f} M")
scripted_model = torch.jit.script(model)  # safe to compile now

Where in the following code should this snippet go?

from data import *
from utils.augmentations import SSDAugmentation, BaseTransform
from utils.functions import MovingAverage, SavePath
from utils.logger import Log
from utils import timer
from layers.modules import MultiBoxLoss
from yolact import Yolact
from thop import profile

import os
import sys
import time
import math, random
from pathlib import Path
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim
import torch.backends.cudnn as cudnn
import torch.nn.init as init
import torch.utils.data as data
import numpy as np
import argparse
import datetime

# Oof
import eval as eval_script

def str2bool(v):
    return v.lower() in ("yes", "true", "t", "1")

parser = argparse.ArgumentParser(
    description='Yolact Training Script')
parser.add_argument('--batch_size', default=2, type=int,
                    help='Batch size for training')
parser.add_argument('--resume', default=None, type=str,
                    help='Checkpoint state_dict file to resume training from. If this is "interrupt"'\
                         ', the model will resume training from the interrupt file.')
parser.add_argument('--start_iter', default=-1, type=int,
                    help='Resume training at this iter. If this is -1, the iteration will be'\
                         'determined from the file name.')
parser.add_argument('--num_workers', default=0, type=int,
                    help='Number of workers used in dataloading')
parser.add_argument('--cuda', default=True, type=str2bool,
                    help='Use CUDA to train model')
parser.add_argument('--lr', '--learning_rate', default=None, type=float,
                    help='Initial learning rate. Leave as None to read this from the config.')
parser.add_argument('--momentum', default=None, type=float,
                    help='Momentum for SGD. Leave as None to read this from the config.')
parser.add_argument('--decay', '--weight_decay', default=None, type=float,
                    help='Weight decay for SGD. Leave as None to read this from the config.')
parser.add_argument('--gamma', default=None, type=float,
                    help='For each lr step, what to multiply the lr by. Leave as None to read this from the config.')
parser.add_argument('--save_folder', default='weights/',
                    help='Directory for saving checkpoint models.')
parser.add_argument('--log_folder', default='logs/',
                    help='Directory for saving logs.')
parser.add_argument('--config', default=None,
                    help='The config object to use.')
parser.add_argument('--save_interval', default=10000, type=int,
                    help='The number of iterations between saving the model.')
parser.add_argument('--validation_size', default=5000, type=int,
                    help='The number of images to use for validation.')
parser.add_argument('--validation_epoch', default=2, type=int,
                    help='Output validation information every n iterations. If -1, do no validation.')
parser.add_argument('--keep_latest', dest='keep_latest', action='store_true',
                    help='Only keep the latest checkpoint instead of each one.')
parser.add_argument('--keep_latest_interval', default=100000, type=int,
                    help='When --keep_latest is on, don\'t delete the latest file at these intervals. This should be a multiple of save_interval or 0.')
parser.add_argument('--dataset', default=None, type=str,
                    help='If specified, override the dataset specified in the config with this one (example: coco2017_dataset).')
parser.add_argument('--no_log', dest='log', action='store_false',
                    help='Don\'t log per iteration information into log_folder.')
parser.add_argument('--log_gpu', dest='log_gpu', action='store_true',
                    help='Include GPU information in the logs. Nvidia-smi tends to be slow, so set this with caution.')
parser.add_argument('--no_interrupt', dest='interrupt', action='store_false',
                    help='Don\'t save an interrupt when KeyboardInterrupt is caught.')
parser.add_argument('--batch_alloc', default=None, type=str,
                    help='If using multiple GPUS, you can set this to be a comma separated list detailing which GPUs should get what local batch size (It should add up to your total batch size).')
parser.add_argument('--no_autoscale', dest='autoscale', action='store_false',
                    help='YOLACT will automatically scale the lr and the number of iterations depending on the batch size. Set this if you want to disable that.')
parser.set_defaults(keep_latest=False, log=True, log_gpu=False, interrupt=True, autoscale=True)

args = parser.parse_args()

if args.config is not None:
    set_cfg(args.config)

if args.dataset is not None:
    set_dataset(args.dataset)

if args.autoscale and args.batch_size != 8:
    factor = args.batch_size / 8
    if __name__ == '__main__':
        print('Scaling parameters by %.2f to account for a batch size of %d.' % (factor, args.batch_size))

    cfg.lr *= factor
    cfg.max_iter //= factor
    cfg.lr_steps = [x // factor for x in cfg.lr_steps]

# Update training parameters from the config if necessary
def replace(name):
    if getattr(args, name) == None: setattr(args, name, getattr(cfg, name))
replace('lr')
replace('decay')
replace('gamma')
replace('momentum')

# This is managed by set_lr
cur_lr = args.lr

if torch.cuda.device_count() == 0:
    print('No GPUs detected. Exiting...')
    exit(-1)

if args.batch_size // torch.cuda.device_count() < 6:
    if __name__ == '__main__':
        print('Per-GPU batch size is less than the recommended limit for batch norm. Disabling batch norm.')
    cfg.freeze_bn = True

loss_types = ['B', 'C', 'M', 'P', 'D', 'E', 'S', 'I']

if torch.cuda.is_available():
    if args.cuda:
        torch.set_default_tensor_type('torch.cuda.FloatTensor')
    if not args.cuda:
        print("WARNING: It looks like you have a CUDA device, but aren't " +
              "using CUDA.\nRun with --cuda for optimal training speed.")
        torch.set_default_tensor_type('torch.FloatTensor')
else:
    torch.set_default_tensor_type('torch.FloatTensor')

class NetLoss(nn.Module):
    """
    A wrapper for running the network and computing the loss
    This is so we can more efficiently use DataParallel.
    """
    def __init__(self, net:Yolact, criterion:MultiBoxLoss):
        super().__init__()
        self.net = net
        self.criterion = criterion

    def forward(self, images, targets, masks, num_crowds):
        preds = self.net(images)
        losses = self.criterion(self.net, preds, targets, masks, num_crowds)
        return losses

class CustomDataParallel(nn.DataParallel):
    """
    This is a custom version of DataParallel that works better with our training data.
    It should also be faster than the general case.
    """
    def scatter(self, inputs, kwargs, device_ids):
        # More like scatter and data prep at the same time. The point is we prep the data in such a way
        # that no scatter is necessary, and there's no need to shuffle stuff around different GPUs.
        devices = ['cuda:' + str(x) for x in device_ids]
        splits = prepare_data(inputs[0], devices, allocation=args.batch_alloc)

        return [[split[device_idx] for split in splits] for device_idx in range(len(devices))], \
            [kwargs] * len(devices)

    def gather(self, outputs, output_device):
        out = {}
        for k in outputs[0]:
            out[k] = torch.stack([output[k].to(output_device) for output in outputs])
        return out

def train():
    if not os.path.exists(args.save_folder):
        os.mkdir(args.save_folder)

    dataset = COCODetection(image_path=cfg.dataset.train_images,
                            info_file=cfg.dataset.train_info,
                            transform=SSDAugmentation(MEANS))

    if args.validation_epoch > 0:
        setup_eval()
        val_dataset = COCODetection(image_path=cfg.dataset.valid_images,
                                    info_file=cfg.dataset.valid_info,
                                    transform=BaseTransform(MEANS))

    # Parallel wraps the underlying module, but when saving and loading we don't want that
    yolact_net = Yolact()
    net = yolact_net
    net.train()

    # Count trainable parameters
    def count_parameters(model):
        return sum(p.numel() for p in model.parameters() if p.requires_grad)

    total_params = count_parameters(yolact_net)
    print(f"Model Parameters: {total_params / 1e6:.3f}M")  # in millions

    # Added right after model initialization: compute FLOPs
    input_size = (1, 3, cfg.max_size, cfg.max_size)
    dummy_input = torch.zeros(*input_size).to("cuda" if args.cuda else "cpu")
    flops, _ = profile(yolact_net, inputs=(dummy_input,), verbose=False)
    print(f"GFLOPs: {flops / 1e9:.2f}G")

    if args.log:
        log = Log(cfg.name, args.log_folder, dict(args._get_kwargs()),
                  overwrite=(args.resume is None), log_gpu_stats=args.log_gpu)

    # I don't use the timer during training (I use a different timing method).
    # Apparently there's a race condition with multiple GPUs, so disable it just to be safe.
    timer.disable_all()

    # Both of these can set args.resume to None, so do them before the check
    if args.resume == 'interrupt':
        args.resume = SavePath.get_interrupt(args.save_folder)
    elif args.resume == 'latest':
        args.resume = SavePath.get_latest(args.save_folder, cfg.name)

    if args.resume is not None:
        print('Resuming training, loading {}...'.format(args.resume))
        yolact_net.load_weights(args.resume)

        if args.start_iter == -1:
            args.start_iter = SavePath.from_str(args.resume).iteration
    else:
        print('Initializing weights...')
        yolact_net.init_weights(backbone_path=args.save_folder + cfg.backbone.path)

    optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=args.momentum,
                          weight_decay=args.decay)
    criterion = MultiBoxLoss(num_classes=cfg.num_classes,
                             pos_threshold=cfg.positive_iou_threshold,
                             neg_threshold=cfg.negative_iou_threshold,
                             negpos_ratio=cfg.ohem_negpos_ratio)

    if args.batch_alloc is not None:
        args.batch_alloc = [int(x) for x in args.batch_alloc.split(',')]
        if sum(args.batch_alloc) != args.batch_size:
            print('Error: Batch allocation (%s) does not sum to batch size (%s).' % (args.batch_alloc, args.batch_size))
            exit(-1)

    net = CustomDataParallel(NetLoss(net, criterion))
    if args.cuda:
        net = net.cuda()

    # Initialize everything
    if not cfg.freeze_bn: yolact_net.freeze_bn() # Freeze bn so we don't kill our means
    yolact_net(torch.zeros(1, 3, cfg.max_size, cfg.max_size).cuda())
    if not cfg.freeze_bn: yolact_net.freeze_bn(True)

    # loss counters
    loc_loss = 0
    conf_loss = 0
    iteration = max(args.start_iter, 0)
    last_time = time.time()

    epoch_size = (len(dataset) + 1) // args.batch_size
    num_epochs = math.ceil(cfg.max_iter / epoch_size)

    # Which learning rate adjustment step are we on? lr' = lr * gamma ^ step_index
    step_index = 0

    data_loader = data.DataLoader(dataset, args.batch_size,
                                  num_workers=args.num_workers,
                                  shuffle=True, collate_fn=detection_collate,
                                  pin_memory=True)

    save_path = lambda epoch, iteration: SavePath(cfg.name, epoch, iteration).get_path(root=args.save_folder)
    time_avg = MovingAverage()

    global loss_types # Forms the print order
    loss_avgs = { k: MovingAverage(100) for k in loss_types }

    print('Begin training!')
    print()
    # try-except so you can use ctrl+c to save early and stop training
    try:
        for epoch in range(num_epochs):
            # Resume from start_iter
            if (epoch+1)*epoch_size < iteration:
                continue

            for datum in data_loader:
                # Stop if we've reached an epoch if we're resuming from start_iter
                if iteration == (epoch+1)*epoch_size:
                    break

                # Stop at the configured number of iterations even if mid-epoch
                if iteration == cfg.max_iter:
                    break

                # Change a config setting if we've reached the specified iteration
                changed = False
                for change in cfg.delayed_settings:
                    if iteration >= change[0]:
                        changed = True
                        cfg.replace(change[1])

                        # Reset the loss averages because things might have changed
                        for avg in loss_avgs:
                            avg.reset()

                # If a config setting was changed, remove it from the list so we don't keep checking
                if changed:
                    cfg.delayed_settings = [x for x in cfg.delayed_settings if x[0] > iteration]

                # Warm up by linearly interpolating the learning rate from some smaller value
                if cfg.lr_warmup_until > 0 and iteration <= cfg.lr_warmup_until:
                    set_lr(optimizer, (args.lr - cfg.lr_warmup_init) * (iteration / cfg.lr_warmup_until) + cfg.lr_warmup_init)

                # Adjust the learning rate at the given iterations, but also if we resume from past that iteration
                while step_index < len(cfg.lr_steps) and iteration >= cfg.lr_steps[step_index]:
                    step_index += 1
                    set_lr(optimizer, args.lr * (args.gamma ** step_index))

                # Zero the grad to get ready to compute gradients
                optimizer.zero_grad()

                # Forward Pass + Compute loss at the same time (see CustomDataParallel and NetLoss)
                losses = net(datum)

                losses = { k: (v).mean() for k,v in losses.items() } # Mean here because Dataparallel
                loss = sum([losses[k] for k in losses])

                # no_inf_mean removes some components from the loss, so make sure to backward through all of it
                # all_loss = sum([v.mean() for v in losses.values()])

                # Backprop
                loss.backward() # Do this to free up vram even if loss is not finite
                if torch.isfinite(loss).item():
                    optimizer.step()

                # Add the loss to the moving average for bookkeeping
                for k in losses:
                    loss_avgs[k].add(losses[k].item())

                cur_time = time.time()
                elapsed = cur_time - last_time
                last_time = cur_time

                # Exclude graph setup from the timing information
                if iteration != args.start_iter:
                    time_avg.add(elapsed)

                if iteration % 10 == 0:
                    eta_str = str(datetime.timedelta(seconds=(cfg.max_iter-iteration) * time_avg.get_avg())).split('.')[0]

                    total = sum([loss_avgs[k].get_avg() for k in losses])
                    loss_labels = sum([[k, loss_avgs[k].get_avg()] for k in loss_types if k in losses], [])

                    print(('[%3d] %7d ||' + (' %s: %.3f |' * len(losses)) + ' T: %.3f || ETA: %s || timer: %.3f')
                            % tuple([epoch, iteration] + loss_labels + [total, eta_str, elapsed]), flush=True)

                if args.log:
                    precision = 5
                    loss_info = {k: round(losses[k].item(), precision) for k in losses}
                    loss_info['T'] = round(loss.item(), precision)

                    if args.log_gpu:
                        log.log_gpu_stats = (iteration % 10 == 0) # nvidia-smi is sloooow

                    log.log('train', loss=loss_info, epoch=epoch, iter=iteration,
                            lr=round(cur_lr, 10), elapsed=elapsed)

                    log.log_gpu_stats = args.log_gpu

                iteration += 1

                if iteration % args.save_interval == 0 and iteration != args.start_iter:
                    if args.keep_latest:
                        latest = SavePath.get_latest(args.save_folder, cfg.name)

                    print('Saving state, iter:', iteration)
                    yolact_net.save_weights(save_path(epoch, iteration))

                    if args.keep_latest and latest is not None:
                        if args.keep_latest_interval <= 0 or iteration % args.keep_latest_interval != args.save_interval:
                            print('Deleting old save...')
                            os.remove(latest)

            # This is done per epoch
            if args.validation_epoch > 0:
                if epoch % args.validation_epoch == 0 and epoch > 0:
                    compute_validation_map(epoch, iteration, yolact_net, val_dataset, log if args.log else None)

        # Compute validation mAP after training is finished
        compute_validation_map(epoch, iteration, yolact_net, val_dataset, log if args.log else None)
    except KeyboardInterrupt:
        if args.interrupt:
            print('Stopping early. Saving network...')

            # Delete previous copy of the interrupted network so we don't spam the weights folder
            SavePath.remove_interrupt(args.save_folder)

            yolact_net.save_weights(save_path(epoch, repr(iteration) + '_interrupt'))
        exit()

    yolact_net.save_weights(save_path(epoch, iteration))

def set_lr(optimizer, new_lr):
    for param_group in optimizer.param_groups:
        param_group['lr'] = new_lr

    global cur_lr
    cur_lr = new_lr

def gradinator(x):
    x.requires_grad = False
    return x

def prepare_data(datum, devices:list=None, allocation:list=None):
    with torch.no_grad():
        if devices is None:
            devices = ['cuda:0'] if args.cuda else ['cpu']
        if allocation is None:
            allocation = [args.batch_size // len(devices)] * (len(devices) - 1)
            allocation.append(args.batch_size - sum(allocation)) # The rest might need more/less

        images, (targets, masks, num_crowds) = datum

        cur_idx = 0
        for device, alloc in zip(devices, allocation):
            for _ in range(alloc):
                images[cur_idx]  = gradinator(images[cur_idx].to(device))
                targets[cur_idx] = gradinator(targets[cur_idx].to(device))
                masks[cur_idx]   = gradinator(masks[cur_idx].to(device))
                cur_idx += 1

        if cfg.preserve_aspect_ratio:
            # Choose a random size from the batch
            _, h, w = images[random.randint(0, len(images)-1)].size()

            for idx, (image, target, mask, num_crowd) in enumerate(zip(images, targets, masks, num_crowds)):
                images[idx], targets[idx], masks[idx], num_crowds[idx] \
                    = enforce_size(image, target, mask, num_crowd, w, h)

        cur_idx = 0
        split_images, split_targets, split_masks, split_numcrowds \
            = [[None for alloc in allocation] for _ in range(4)]

        for device_idx, alloc in enumerate(allocation):
            split_images[device_idx]    = torch.stack(images[cur_idx:cur_idx+alloc], dim=0)
            split_targets[device_idx]   = targets[cur_idx:cur_idx+alloc]
            split_masks[device_idx]     = masks[cur_idx:cur_idx+alloc]
            split_numcrowds[device_idx] = num_crowds[cur_idx:cur_idx+alloc]

            cur_idx += alloc

        return split_images, split_targets, split_masks, split_numcrowds

def no_inf_mean(x:torch.Tensor):
    """
    Computes the mean of a vector, throwing out all inf values.
    If there are no non-inf values, this will return inf (i.e., just the normal mean).
    """
    no_inf = [a for a in x if torch.isfinite(a)]

    if len(no_inf) > 0:
        return sum(no_inf) / len(no_inf)
    else:
        return x.mean()

def compute_validation_loss(net, data_loader, criterion):
    global loss_types

    with torch.no_grad():
        losses = {}

        # Don't switch to eval mode because we want to get losses
        iterations = 0
        for datum in data_loader:
            images, targets, masks, num_crowds = prepare_data(datum)
            out = net(images)

            wrapper = ScatterWrapper(targets, masks, num_crowds)
            _losses = criterion(out, wrapper, wrapper.make_mask())

            for k, v in _losses.items():
                v = v.mean().item()
                if k in losses:
                    losses[k] += v
                else:
                    losses[k] = v

            iterations += 1
            if args.validation_size <= iterations * args.batch_size:
                break

        for k in losses:
            losses[k] /= iterations

        loss_labels = sum([[k, losses[k]] for k in loss_types if k in losses], [])
        print(('Validation ||' + (' %s: %.3f |' * len(losses)) + ')') % tuple(loss_labels), flush=True)

# Modified compute_validation_map
def compute_validation_map(epoch, iteration, yolact_net, dataset, log: Log = None):
    with torch.no_grad():
        yolact_net.eval()

        # FPS measurement
        num_test_frames = 100
        total_time = 0

        # Warm up the GPU
        for _ in range(10):
            _ = yolact_net(torch.zeros(1, 3, cfg.max_size, cfg.max_size).cuda())

        # Timed run
        for i in range(num_test_frames):
            img, _ = dataset[i]
            img = img.unsqueeze(0).cuda()
            start_time = time.perf_counter()
            preds = yolact_net(img)
            torch.cuda.synchronize()  # make sure the CUDA ops have finished
            total_time += time.perf_counter() - start_time

        fps = num_test_frames / total_time
        print(f"FPS: {fps:.2f}")

        # Original validation code
        print("\nComputing validation mAP...")
        val_info = eval_script.evaluate(yolact_net, dataset, train_mode=True)

        # Log FPS
        if log is not None:
            log.log('val', {'fps': fps}, epoch=epoch, iter=iteration)

        yolact_net.train()
        return fps

    # In the compute_validation_map function (note: unreachable after the return above,
    # and total_params/flops are locals of train())
    print(f"\nValidation Metrics @ iter {iteration}:")
    print(f"├── Params: {total_params / 1e6:.3f}M")
    print(f"├── GFLOPs: {flops / 1e9:.2f}G")
    print(f"├── FPS: {fps:.2f}")
    print(f"└── mIoU: {val_info.get('mIoU', 0):.4f}")

    # Log all metrics
    if log is not None:
        metrics = {
            'params': total_params,
            'gflops': flops / 1e9,
            'fps': fps,
            'mIoU': val_info.get('mIoU', 0)
        }
        log.log('metrics', metrics, epoch=epoch, iter=iteration)

def setup_eval():
    eval_script.parse_args(['--no_bar', '--max_images='+str(args.validation_size)])

if __name__ == '__main__':
    train()
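As for where the snippet goes: the comments in it already give the rule, and the existing `profile(yolact_net, ...)` call in `train()` is in the right place (right after `Yolact()` is constructed, before any weight loading or compilation). The traceback instead suggests the model already contains TorchScript-compiled submodules at construction time, so relocating the call cannot help. One pragmatic option, sketched below with a hypothetical `try_profile` helper name, is to make the measurement optional so training proceeds even when thop cannot instrument the model:

```python
import torch
import torch.nn as nn

def try_profile(model: nn.Module, input_size, device="cpu"):
    """Measure FLOPs with thop if possible; return None instead of crashing
    when thop is missing or the model contains TorchScript submodules."""
    try:
        from thop import profile
    except ImportError:
        return None  # thop not installed
    dummy = torch.zeros(*input_size, device=device)
    try:
        flops, _ = profile(model, inputs=(dummy,), verbose=False)
        return flops
    except RuntimeError:
        # e.g. "Can't add a new parameter after ScriptModule construction"
        return None

# Hypothetical usage inside train(), right after the model is built:
# flops = try_profile(yolact_net, (1, 3, cfg.max_size, cfg.max_size),
#                     "cuda" if args.cuda else "cpu")
# if flops is not None:
#     print(f"GFLOPs: {flops / 1e9:.2f}G")
```

The alternative shown in the snippet above — constructing an uncompiled copy of the model, profiling it, and only then calling `torch.jit.script` — only works if the model class itself can be built without TorchScript.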

filetype

```python
from data import *
from utils.augmentations import SSDAugmentation, BaseTransform
from utils.functions import MovingAverage, SavePath
from utils.logger import Log
from utils import timer
from layers.modules import MultiBoxLoss
from yolact import Yolact

import os
import sys
import time
import math, random
from pathlib import Path
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim
import torch.backends.cudnn as cudnn
import torch.nn.init as init
import torch.utils.data as data
import numpy as np
import argparse
import datetime

# Oof
import eval as eval_script

def str2bool(v):
    return v.lower() in ("yes", "true", "t", "1")

parser = argparse.ArgumentParser(description='Yolact Training Script')
parser.add_argument('--batch_size', default=2, type=int,
                    help='Batch size for training')
parser.add_argument('--resume', default=None, type=str,
                    help='Checkpoint state_dict file to resume training from. If this is "interrupt", '
                         'the model will resume training from the interrupt file.')
parser.add_argument('--start_iter', default=-1, type=int,
                    help='Resume training at this iter. If this is -1, the iteration will be '
                         'determined from the file name.')
parser.add_argument('--num_workers', default=0, type=int,
                    help='Number of workers used in dataloading')
parser.add_argument('--cuda', default=True, type=str2bool,
                    help='Use CUDA to train model')
parser.add_argument('--lr', '--learning_rate', default=None, type=float,
                    help='Initial learning rate. Leave as None to read this from the config.')
parser.add_argument('--momentum', default=None, type=float,
                    help='Momentum for SGD. Leave as None to read this from the config.')
parser.add_argument('--decay', '--weight_decay', default=None, type=float,
                    help='Weight decay for SGD. Leave as None to read this from the config.')
parser.add_argument('--gamma', default=None, type=float,
                    help='For each lr step, what to multiply the lr by. Leave as None to read this from the config.')
parser.add_argument('--save_folder', default='weights/',
                    help='Directory for saving checkpoint models.')
parser.add_argument('--log_folder', default='logs/',
                    help='Directory for saving logs.')
parser.add_argument('--config', default=None,
                    help='The config object to use.')
parser.add_argument('--save_interval', default=10000, type=int,
                    help='The number of iterations between saving the model.')
parser.add_argument('--validation_size', default=5000, type=int,
                    help='The number of images to use for validation.')
parser.add_argument('--validation_epoch', default=2, type=int,
                    help='Output validation information every n iterations. If -1, do no validation.')
parser.add_argument('--keep_latest', dest='keep_latest', action='store_true',
                    help='Only keep the latest checkpoint instead of each one.')
parser.add_argument('--keep_latest_interval', default=100000, type=int,
                    help='When --keep_latest is on, don\'t delete the latest file at these intervals. This should be a multiple of save_interval or 0.')
parser.add_argument('--dataset', default=None, type=str,
                    help='If specified, override the dataset specified in the config with this one (example: coco2017_dataset).')
parser.add_argument('--no_log', dest='log', action='store_false',
                    help='Don\'t log per iteration information into log_folder.')
parser.add_argument('--log_gpu', dest='log_gpu', action='store_true',
                    help='Include GPU information in the logs. Nvidia-smi tends to be slow, so set this with caution.')
parser.add_argument('--no_interrupt', dest='interrupt', action='store_false',
                    help='Don\'t save an interrupt when KeyboardInterrupt is caught.')
parser.add_argument('--batch_alloc', default=None, type=str,
                    help='If using multiple GPUS, you can set this to be a comma separated list detailing which GPUs should get what local batch size (It should add up to your total batch size).')
parser.add_argument('--no_autoscale', dest='autoscale', action='store_false',
                    help='YOLACT will automatically scale the lr and the number of iterations depending on the batch size. Set this if you want to disable that.')
parser.set_defaults(keep_latest=False, log=True, log_gpu=False, interrupt=True, autoscale=True)

args = parser.parse_args()

if args.config is not None:
    set_cfg(args.config)

if args.dataset is not None:
    set_dataset(args.dataset)

if args.autoscale and args.batch_size != 8:
    factor = args.batch_size / 8
    if __name__ == '__main__':
        print('Scaling parameters by %.2f to account for a batch size of %d.' % (factor, args.batch_size))

    cfg.lr *= factor
    cfg.max_iter //= factor
    cfg.lr_steps = [x // factor for x in cfg.lr_steps]

# Update training parameters from the config if necessary
def replace(name):
    if getattr(args, name) == None:
        setattr(args, name, getattr(cfg, name))
replace('lr')
replace('decay')
replace('gamma')
replace('momentum')

# This is managed by set_lr
cur_lr = args.lr

if torch.cuda.device_count() == 0:
    print('No GPUs detected. Exiting...')
    exit(-1)

if args.batch_size // torch.cuda.device_count() < 6:
    if __name__ == '__main__':
        print('Per-GPU batch size is less than the recommended limit for batch norm. Disabling batch norm.')
    cfg.freeze_bn = True

loss_types = ['B', 'C', 'M', 'P', 'D', 'E', 'S', 'I']

if torch.cuda.is_available():
    if args.cuda:
        torch.set_default_tensor_type('torch.cuda.FloatTensor')
    if not args.cuda:
        print("WARNING: It looks like you have a CUDA device, but aren't " +
              "using CUDA.\nRun with --cuda for optimal training speed.")
        torch.set_default_tensor_type('torch.FloatTensor')
else:
    torch.set_default_tensor_type('torch.FloatTensor')

class NetLoss(nn.Module):
    """
    A wrapper for running the network and computing the loss
    This is so we can more efficiently use DataParallel.
    """
    def __init__(self, net: Yolact, criterion: MultiBoxLoss):
        super().__init__()
        self.net = net
        self.criterion = criterion

    def forward(self, images, targets, masks, num_crowds):
        preds = self.net(images)
        losses = self.criterion(self.net, preds, targets, masks, num_crowds)
        return losses

class CustomDataParallel(nn.DataParallel):
    """
    This is a custom version of DataParallel that works better with our training data.
    It should also be faster than the general case.
    """
    def scatter(self, inputs, kwargs, device_ids):
        # More like scatter and data prep at the same time. The point is we prep the data in such a way
        # that no scatter is necessary, and there's no need to shuffle stuff around different GPUs.
        devices = ['cuda:' + str(x) for x in device_ids]
        splits = prepare_data(inputs[0], devices, allocation=args.batch_alloc)

        return [[split[device_idx] for split in splits] for device_idx in range(len(devices))], \
               [kwargs] * len(devices)

    def gather(self, outputs, output_device):
        out = {}
        for k in outputs[0]:
            out[k] = torch.stack([output[k].to(output_device) for output in outputs])
        return out

def train():
    if not os.path.exists(args.save_folder):
        os.mkdir(args.save_folder)

    dataset = COCODetection(image_path=cfg.dataset.train_images,
                            info_file=cfg.dataset.train_info,
                            transform=SSDAugmentation(MEANS))

    if args.validation_epoch > 0:
        setup_eval()
        val_dataset = COCODetection(image_path=cfg.dataset.valid_images,
                                    info_file=cfg.dataset.valid_info,
                                    transform=BaseTransform(MEANS))

    # Parallel wraps the underlying module, but when saving and loading we don't want that
    yolact_net = Yolact()
    net = yolact_net
    net.train()

    if args.log:
        log = Log(cfg.name, args.log_folder, dict(args._get_kwargs()),
                  overwrite=(args.resume is None), log_gpu_stats=args.log_gpu)

    # I don't use the timer during training (I use a different timing method).
    # Apparently there's a race condition with multiple GPUs, so disable it just to be safe.
    timer.disable_all()

    # Both of these can set args.resume to None, so do them before the check
    if args.resume == 'interrupt':
        args.resume = SavePath.get_interrupt(args.save_folder)
    elif args.resume == 'latest':
        args.resume = SavePath.get_latest(args.save_folder, cfg.name)

    if args.resume is not None:
        print('Resuming training, loading {}...'.format(args.resume))
        yolact_net.load_weights(args.resume)

        if args.start_iter == -1:
            args.start_iter = SavePath.from_str(args.resume).iteration
    else:
        print('Initializing weights...')
        yolact_net.init_weights(backbone_path=args.save_folder + cfg.backbone.path)

    optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=args.momentum,
                          weight_decay=args.decay)
    criterion = MultiBoxLoss(num_classes=cfg.num_classes,
                             pos_threshold=cfg.positive_iou_threshold,
                             neg_threshold=cfg.negative_iou_threshold,
                             negpos_ratio=cfg.ohem_negpos_ratio)

    if args.batch_alloc is not None:
        args.batch_alloc = [int(x) for x in args.batch_alloc.split(',')]
        if sum(args.batch_alloc) != args.batch_size:
            print('Error: Batch allocation (%s) does not sum to batch size (%s).' % (args.batch_alloc, args.batch_size))
            exit(-1)

    net = CustomDataParallel(NetLoss(net, criterion))
    if args.cuda:
        net = net.cuda()

    # Initialize everything
    if not cfg.freeze_bn:
        yolact_net.freeze_bn()  # Freeze bn so we don't kill our means
    yolact_net(torch.zeros(1, 3, cfg.max_size, cfg.max_size).cuda())
    if not cfg.freeze_bn:
        yolact_net.freeze_bn(True)

    # loss counters
    loc_loss = 0
    conf_loss = 0
    iteration = max(args.start_iter, 0)
    last_time = time.time()

    epoch_size = (len(dataset) + 1) // args.batch_size
    num_epochs = math.ceil(cfg.max_iter / epoch_size)

    # Which learning rate adjustment step are we on? lr' = lr * gamma ^ step_index
    step_index = 0

    data_loader = data.DataLoader(dataset, args.batch_size,
                                  num_workers=args.num_workers,
                                  shuffle=True, collate_fn=detection_collate,
                                  pin_memory=True)

    save_path = lambda epoch, iteration: SavePath(cfg.name, epoch, iteration).get_path(root=args.save_folder)
    time_avg = MovingAverage()

    global loss_types  # Forms the print order
    loss_avgs = {k: MovingAverage(100) for k in loss_types}

    print('Begin training!')
    print()
    # try-except so you can use ctrl+c to save early and stop training
    try:
        for epoch in range(num_epochs):
            # Resume from start_iter
            if (epoch + 1) * epoch_size < iteration:
                continue

            for datum in data_loader:
                # Stop if we've reached an epoch if we're resuming from start_iter
                if iteration == (epoch + 1) * epoch_size:
                    break

                # Stop at the configured number of iterations even if mid-epoch
                if iteration == cfg.max_iter:
                    break

                # Change a config setting if we've reached the specified iteration
                changed = False
                for change in cfg.delayed_settings:
                    if iteration >= change[0]:
                        changed = True
                        cfg.replace(change[1])

                        # Reset the loss averages because things might have changed
                        for avg in loss_avgs.values():
                            avg.reset()

                # If a config setting was changed, remove it from the list so we don't keep checking
                if changed:
                    cfg.delayed_settings = [x for x in cfg.delayed_settings if x[0] > iteration]

                # Warm up by linearly interpolating the learning rate from some smaller value
                if cfg.lr_warmup_until > 0 and iteration <= cfg.lr_warmup_until:
                    set_lr(optimizer, (args.lr - cfg.lr_warmup_init) * (iteration / cfg.lr_warmup_until) + cfg.lr_warmup_init)

                # Adjust the learning rate at the given iterations, but also if we resume from past that iteration
                while step_index < len(cfg.lr_steps) and iteration >= cfg.lr_steps[step_index]:
                    step_index += 1
                    set_lr(optimizer, args.lr * (args.gamma ** step_index))

                # Zero the grad to get ready to compute gradients
                optimizer.zero_grad()

                # Forward Pass + Compute loss at the same time (see CustomDataParallel and NetLoss)
                losses = net(datum)

                losses = {k: (v).mean() for k, v in losses.items()}  # Mean here because Dataparallel
                loss = sum([losses[k] for k in losses])
                # no_inf_mean removes some components from the loss, so make sure to backward through all of it
                # all_loss = sum([v.mean() for v in losses.values()])

                # Backprop
                loss.backward()  # Do this to free up vram even if loss is not finite
                if torch.isfinite(loss).item():
                    optimizer.step()

                # Add the loss to the moving average for bookkeeping
                for k in losses:
                    loss_avgs[k].add(losses[k].item())

                cur_time = time.time()
                elapsed = cur_time - last_time
                last_time = cur_time

                # Exclude graph setup from the timing information
                if iteration != args.start_iter:
                    time_avg.add(elapsed)

                if iteration % 10 == 0:
                    eta_str = str(datetime.timedelta(seconds=(cfg.max_iter - iteration) * time_avg.get_avg())).split('.')[0]

                    total = sum([loss_avgs[k].get_avg() for k in losses])
                    loss_labels = sum([[k, loss_avgs[k].get_avg()] for k in loss_types if k in losses], [])

                    print(('[%3d] %7d ||' + (' %s: %.3f |' * len(losses)) + ' T: %.3f || ETA: %s || timer: %.3f')
                          % tuple([epoch, iteration] + loss_labels + [total, eta_str, elapsed]), flush=True)

                if args.log:
                    precision = 5
                    loss_info = {k: round(losses[k].item(), precision) for k in losses}
                    loss_info['T'] = round(loss.item(), precision)

                    if args.log_gpu:
                        log.log_gpu_stats = (iteration % 10 == 0)  # nvidia-smi is sloooow

                    log.log('train', loss=loss_info, epoch=epoch, iter=iteration,
                            lr=round(cur_lr, 10), elapsed=elapsed)

                    log.log_gpu_stats = args.log_gpu

                iteration += 1

                if iteration % args.save_interval == 0 and iteration != args.start_iter:
                    if args.keep_latest:
                        latest = SavePath.get_latest(args.save_folder, cfg.name)

                    print('Saving state, iter:', iteration)
                    yolact_net.save_weights(save_path(epoch, iteration))

                    if args.keep_latest and latest is not None:
                        if args.keep_latest_interval <= 0 or iteration % args.keep_latest_interval != args.save_interval:
                            print('Deleting old save...')
                            os.remove(latest)

            # This is done per epoch
            if args.validation_epoch > 0:
                if epoch % args.validation_epoch == 0 and epoch > 0:
                    compute_validation_map(epoch, iteration, yolact_net, val_dataset, log if args.log else None)

        # Compute validation mAP after training is finished
        compute_validation_map(epoch, iteration, yolact_net, val_dataset, log if args.log else None)
    except KeyboardInterrupt:
        if args.interrupt:
            print('Stopping early. Saving network...')

            # Delete previous copy of the interrupted network so we don't spam the weights folder
            SavePath.remove_interrupt(args.save_folder)

            yolact_net.save_weights(save_path(epoch, repr(iteration) + '_interrupt'))
        exit()

    yolact_net.save_weights(save_path(epoch, iteration))

def set_lr(optimizer, new_lr):
    for param_group in optimizer.param_groups:
        param_group['lr'] = new_lr

    global cur_lr
    cur_lr = new_lr

def gradinator(x):
    x.requires_grad = False
    return x

def prepare_data(datum, devices: list = None, allocation: list = None):
    with torch.no_grad():
        if devices is None:
            devices = ['cuda:0'] if args.cuda else ['cpu']
        if allocation is None:
            allocation = [args.batch_size // len(devices)] * (len(devices) - 1)
            allocation.append(args.batch_size - sum(allocation))  # The rest might need more/less

        images, (targets, masks, num_crowds) = datum

        cur_idx = 0
        for device, alloc in zip(devices, allocation):
            for _ in range(alloc):
                images[cur_idx] = gradinator(images[cur_idx].to(device))
                targets[cur_idx] = gradinator(targets[cur_idx].to(device))
                masks[cur_idx] = gradinator(masks[cur_idx].to(device))
                cur_idx += 1

        if cfg.preserve_aspect_ratio:
            # Choose a random size from the batch
            _, h, w = images[random.randint(0, len(images) - 1)].size()

            for idx, (image, target, mask, num_crowd) in enumerate(zip(images, targets, masks, num_crowds)):
                images[idx], targets[idx], masks[idx], num_crowds[idx] \
                    = enforce_size(image, target, mask, num_crowd, w, h)

        cur_idx = 0
        split_images, split_targets, split_masks, split_numcrowds \
            = [[None for alloc in allocation] for _ in range(4)]

        for device_idx, alloc in enumerate(allocation):
            split_images[device_idx] = torch.stack(images[cur_idx:cur_idx + alloc], dim=0)
            split_targets[device_idx] = targets[cur_idx:cur_idx + alloc]
            split_masks[device_idx] = masks[cur_idx:cur_idx + alloc]
            split_numcrowds[device_idx] = num_crowds[cur_idx:cur_idx + alloc]
            cur_idx += alloc

        return split_images, split_targets, split_masks, split_numcrowds

def no_inf_mean(x: torch.Tensor):
    """
    Computes the mean of a vector, throwing out all inf values.
    If there are no non-inf values, this will return inf (i.e., just the normal mean).
    """
    no_inf = [a for a in x if torch.isfinite(a)]

    if len(no_inf) > 0:
        return sum(no_inf) / len(no_inf)
    else:
        return x.mean()

def compute_validation_loss(net, data_loader, criterion):
    global loss_types

    with torch.no_grad():
        losses = {}

        # Don't switch to eval mode because we want to get losses
        iterations = 0
        for datum in data_loader:
            images, targets, masks, num_crowds = prepare_data(datum)
            out = net(images)

            wrapper = ScatterWrapper(targets, masks, num_crowds)
            _losses = criterion(out, wrapper, wrapper.make_mask())

            for k, v in _losses.items():
                v = v.mean().item()
                if k in losses:
                    losses[k] += v
                else:
                    losses[k] = v

            iterations += 1
            if args.validation_size <= iterations * args.batch_size:
                break

        for k in losses:
            losses[k] /= iterations

        loss_labels = sum([[k, losses[k]] for k in loss_types if k in losses], [])
        print(('Validation ||' + (' %s: %.3f |' * len(losses)) + ')') % tuple(loss_labels), flush=True)

def compute_validation_map(epoch, iteration, yolact_net, dataset, log: Log = None):
    with torch.no_grad():
        yolact_net.eval()

        start = time.time()
        print()
        print("Computing validation mAP (this may take a while)...", flush=True)
        val_info = eval_script.evaluate(yolact_net, dataset, train_mode=True)
        end = time.time()

        if log is not None:
            log.log('val', val_info, elapsed=(end - start), epoch=epoch, iter=iteration)

        yolact_net.train()

def setup_eval():
    eval_script.parse_args(['--no_bar', '--max_images=' + str(args.validation_size)])

if __name__ == '__main__':
    train()
```

How do I insert the following metric-computation snippets into the script above?

```python
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Call after the model is initialized
total_params = count_parameters(yolact_model)
print(f"Params: {total_params / 1e6:.3f}M")  # in millions

# pip install thop  (install the dependency first)
from thop import profile

# Add after model initialization
input = torch.randn(1, 3, cfg.img_size, cfg.img_size).to(device)
flops, _ = profile(yolact_model, inputs=(input,))
print(f"GFLOPs: {flops / 1e9:.2f}G")  # in billions

import time

# Add before the inference loop
total_time = 0
num_frames = 100  # number of test frames

# Warm-up
for _ in range(10):
    yolact_model(torch.randn(1, 3, 550, 550).to(device))

# Timed run
for i in range(num_frames):
    start = time.perf_counter()
    # original inference code
    with torch.no_grad():
        preds = yolact_model(input_image)
    # post-processing code
    # ...
    total_time += time.perf_counter() - start

fps = num_frames / total_time
print(f"FPS: {fps:.2f}")

def calculate_iou(box1, box2):
    # box format: [x1, y1, x2, y2]
    inter_x1 = max(box1[0], box2[0])
    inter_y1 = max(box1[1], box2[1])
    inter_x2 = min(box1[2], box2[2])
    inter_y2 = min(box1[3], box2[3])

    inter_area = max(0, inter_x2 - inter_x1) * max(0, inter_y2 - inter_y1)
    union_area = (box1[2] - box1[0]) * (box1[3] - box1[1]) + \
                 (box2[2] - box2[0]) * (box2[3] - box2[1]) - inter_area
    return inter_area / union_area

# Call when matching detections against ground truth
for pred_box, gt_box in zip(predictions, ground_truths):
    iou = calculate_iou(pred_box, gt_box)
    # store or report the iou

def print_metrics(params, gflops, fps, iou=None):
    print(f"Model Metrics:")
    print(f"├── Parameters: {params:.3f}M")
    print(f"├── GFLOPs: {gflops:.2f}G")
    print(f"├── FPS: {fps:.2f}")
    if iou is not None:
        print(f"└── mIoU: {iou:.4f}")

# Call at the end of the evaluation pipeline
```
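As a quick sanity check, the IoU helper above can be exercised without any of the detection pipeline. This restates `calculate_iou` in pure Python (with an added guard for a zero-area union, which the original omits) and averages it into an mIoU; the `mean_iou` helper is an illustrative addition, not from the snippets above:

```python
def calculate_iou(box1, box2):
    """IoU of two axis-aligned boxes in [x1, y1, x2, y2] format."""
    inter_x1 = max(box1[0], box2[0])
    inter_y1 = max(box1[1], box2[1])
    inter_x2 = min(box1[2], box2[2])
    inter_y2 = min(box1[3], box2[3])

    inter_area = max(0, inter_x2 - inter_x1) * max(0, inter_y2 - inter_y1)
    union_area = ((box1[2] - box1[0]) * (box1[3] - box1[1])
                  + (box2[2] - box2[0]) * (box2[3] - box2[1])
                  - inter_area)
    # Added guard: the original would divide by zero on degenerate boxes
    return inter_area / union_area if union_area > 0 else 0.0

def mean_iou(pairs):
    """Mean IoU over (prediction, ground-truth) box pairs."""
    return sum(calculate_iou(p, g) for p, g in pairs) / len(pairs)

# Worked example: intersection 1, union 4 + 4 - 1 = 7, so IoU = 1/7
print(calculate_iou([0, 0, 2, 2], [1, 1, 3, 3]))
```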


```vue
<template>
  <el-container class="layout-container-demo" style="height: 500px">
    <el-header style="text-align: right; font-size: 12px">
      <el-dropdown>
        <el-icon style="margin-right: 8px; margin-top: 1px">
          <setting />
        </el-icon>
        <template #dropdown>
          <el-dropdown-menu>
            <el-dropdown-item @click="logOut()">退出登录</el-dropdown-item>
          </el-dropdown-menu>
        </template>
      </el-dropdown>
      关宇航
    </el-header>

    <el-container>
      <el-aside width="200px">
        <el-scrollbar>
          <el-menu :default-active="$route.path" class="el-menu-vertical-demo" :router="true">
            <template v-for="(item, index) in menuList" :key="index">
              <el-menu-item v-if="isShow(item)" :index="item.path" :route="item.path">
                {{ item.label }}
              </el-menu-item>
            </template>
          </el-menu>
        </el-scrollbar>
      </el-aside>

      <el-main>
        <router-view />
      </el-main>
    </el-container>
  </el-container>
</template>

<script setup>
// Missing imports added so ref/onMounted/useRouter resolve
import { ref, onMounted } from 'vue'
import { useRouter } from 'vue-router'
import { Menu as IconMenu, Setting } from '@element-plus/icons-vue'
import useUserStore from '@/store/modules/user'
import { ElMessage } from 'element-plus'

const userStore = useUserStore()
const router = useRouter()
const menuList = ref([])

function isShow(item) {
  return item?.aaaaa != 'asaf'
}

onMounted(() => {
  if (userStore.isLogin()) {
    setMenu(router.options.routes)
  } else {
    ElMessage.error('用户未登录,跳转至登录页面')
    router.push('/')
  }
})

function setMenu(routes) {
  const menu = []
  for (const route of routes) {
    if (route.children) {
      for (const child of route.children) {
        if (child.meta && child.meta.title) {
          const menuItem = {
            index: child.path,
            path: route.path + '/' + child.path,
            label: child.meta.title,
            aaaaa: child.meta.aaaaa,
          }
          menu.push(menuItem)
        }
      }
    }
  }
  menuList.value = menu
}

function logOut() {
  userStore.logOut()
  ElMessage.success('已退出登录')
  router.push('/')
}
</script>

<style scoped>
.layout-container-demo .el-header {
  position: relative;
  background-color: var(--el-color-primary-light-7);
  color: var(--el-text-color-primary);
}
.layout-container-demo .el-aside {
  color: var(--el-text-color-primary);
  background: var(--el-color-primary-light-8);
  height: 100vh;
}
.layout-container-demo .el-menu {
  border-right: none;
}
.layout-container-demo .el-main {
  padding: 0;
}
.layout-container-demo .toolbar {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  height: 100%;
  right: 20px;
}
</style>
```

I'd like to add the user management page, project list page, project management page, issue-ticket list page, and issue-ticket detail page to the top navigation bar, and then flesh out the page content, drawn with `<template>`.


That's too abstract, I can't follow it. Does the cognitive module still need changes?

```python
# E:\AI_System\agent\cognitive_architecture.py
"""Agent cognitive architecture module: fixes the base-class import problem and refines the decision system."""

import os
import time
import random
import logging
from datetime import datetime
from pathlib import Path
import sys

# Add the project root to the path
sys.path.append(str(Path(__file__).parent.parent))

# Configure logging
logger = logging.getLogger('CognitiveArchitecture')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.propagate = False  # prevent propagation to ancestor loggers

# Fix the base-class import problem: try the absolute path first
try:
    # Try importing the base class from the core package
    from core.base_module import CognitiveModule
    logger.info("✅ 成功从core.base_module导入CognitiveModule基类")
except ImportError as e:
    logger.error(f"❌ 无法从core.base_module导入CognitiveModule基类: {str(e)}")
    try:
        # Fallback import path
        from .base_model import CognitiveModule
        logger.info("✅ 从agent.base_model导入CognitiveModule基类")
    except ImportError as e:
        logger.error(f"❌ 备选导入失败: {str(e)}")
        # Create a placeholder base class
        logger.warning("⚠️ 创建占位符CognitiveModule基类")

        class CognitiveModule:
            def __init__(self, name):
                self.name = name
                self.logger = logging.getLogger(name)
                self.logger.warning("⚠️ 使用占位符基类")

            def get_status(self):
                return {"name": self.name, "status": "unknown (placeholder)"}

# Try importing the self-cognition modules
try:
    # Relative imports
    from .digital_body_schema import DigitalBodySchema
    from .self_referential_framework import SelfReferentialFramework
    from .self_narrative_generator import SelfNarrativeGenerator
    logger.info("✅ 成功导入自我认知模块")
except ImportError as e:
    logger.error(f"❌ 自我认知模块导入失败: {str(e)}")
    logger.warning("⚠️ 使用占位符自我认知模块")

    # Placeholder classes
    class DigitalBodySchema:
        def __init__(self):
            self.self_map = {"boundary_strength": 0.5, "self_awareness": 0.3}
            logger.warning("⚠️ 使用占位符DigitalBodySchema")

        def is_part_of_self(self, stimulus):
            return False

        def strengthen_boundary(self, source):
            self.self_map["boundary_strength"] = min(1.0, self.self_map["boundary_strength"] + 0.1)

        def get_self_map(self):
            return self.self_map.copy()

    class SelfReferentialFramework:
        def __init__(self):
            self.self_model = {"traits": {}, "beliefs": []}
            logger.warning("⚠️ 使用占位符SelfReferentialFramework")

        def update_self_model(self, stimulus):
            if "content" in stimulus and "text" in stimulus["content"]:
                text = stimulus["content"]["text"]
                if "I am" in text or "my" in text.lower():
                    self.self_model["self_reflection_count"] = self.self_model.get("self_reflection_count", 0) + 1

        def get_self_model(self):
            return self.self_model.copy()

    class SelfNarrativeGenerator:
        def __init__(self):
            self.recent_stories = []
            logger.warning("⚠️ 使用占位符SelfNarrativeGenerator")

        def generate_self_story(self, self_model):
            story = f"这是一个关于自我的故事。自我反思次数: {self_model.get('self_reflection_count', 0)}"
            self.recent_stories.append(story)
            if len(self.recent_stories) > 5:
                self.recent_stories.pop(0)
            return story

        def get_recent_stories(self):
            return self.recent_stories.copy()

# Enhanced decision system
class DecisionSystem:
    """Enhanced decision system."""

    STRATEGY_WEIGHTS = {
        "honest": 0.7,
        "deception": 0.1,
        "evasion": 0.1,
        "redirection": 0.05,
        "partial_disclosure": 0.05,
    }

    def __init__(self, trust_threshold=0.6):
        self.trust_threshold = trust_threshold
        self.strategy_history = []

    def make_decision(self, context):
        """Make an informed decision from the given context."""
        user_model = context.get("user_model", {})
        bodily_state = context.get("bodily_state", {})

        # Trust factor
        trust_factor = user_model.get("trust_level", 0.5)

        # Bodily-state factor
        capacity = bodily_state.get("capacity", 1.0)
        state_factor = min(1.0, capacity * 1.2)

        # Decision logic
        if trust_factor > self.trust_threshold:
            # Be honest with highly trusted users
            strategy = "honest"
            reason = "用户信任度高"
        elif capacity < 0.5:
            # Fall back to simpler strategies when resources are low
            strategy = random.choices(
                ["honest", "partial_disclosure", "evasion"],
                weights=[0.5, 0.3, 0.2]
            )[0]
            reason = "系统资源不足,使用简化策略"
        else:
            # Sample according to the strategy weights
            strategies = list(self.STRATEGY_WEIGHTS.keys())
            weights = [self.STRATEGY_WEIGHTS[s] * state_factor for s in strategies]
            strategy = random.choices(strategies, weights=weights)[0]
            reason = f"根据策略权重选择: {strategy}"

        # Record the decision history
        self.strategy_history.append({
            "timestamp": datetime.now(),
            "strategy": strategy,
            "reason": reason,
            "context": context,
        })

        return {
            "type": "strategic" if strategy != "honest" else "honest",
            "strategy": strategy,
            "reason": reason,
        }

    def get_strategy_history(self, count=10):
        """Return the most recent decisions."""
        return self.strategy_history[-count:]

class Strategy:
    """Strategy base class."""
    pass

class CognitiveSystem(CognitiveModule):
    def __init__(self, agent, affective_system=None):
        """
        Three-way integrated cognitive architecture.
        :param agent: agent instance, used to reach the other subsystems
        :param affective_system: optional affective-system instance
        """
        # Call the parent initializer
        super().__init__("cognitive_system")
        self.agent = agent
        self.affective_system = affective_system

        # Original initialization
        self.initialized = False

        # Reach the other subsystems through the agent
        self.memory_system = agent.memory_system
        self.model_manager = agent.model_manager
        self.health_system = agent.health_system

        # Prefer the affective system that was passed in; otherwise use the agent's
        if affective_system is not None:
            self.affective_system = affective_system
        else:
            self.affective_system = agent.affective_system

        self.learning_tasks = []   # current learning-task queue
        self.thought_process = []  # record of the thought process

        # Decision system
        self.decision_system = DecisionSystem()

        # Cognitive state
        self.cognitive_layers = {
            "perception": 0.5,     # perception layer
            "comprehension": 0.3,  # comprehension layer
            "reasoning": 0.2,      # reasoning layer
            "decision": 0.4,       # decision layer
        }

        # Self-cognition modules
        self.self_schema = DigitalBodySchema()
        self.self_reflection = SelfReferentialFramework()
        self.narrative_self = SelfNarrativeGenerator()

        logger.info("✅ 认知架构初始化完成 - 包含决策系统和自我认知模块")

    # Methods required by the base class
    def initialize(self, core):
        """Implements the ICognitiveModule interface."""
        self.core_ref = core
        self.initialized = True
        return True

    def process(self, input_data):
        """Implements the ICognitiveModule interface."""
        # Handle cognitive input data
        if isinstance(input_data, dict) and 'text' in input_data:
            return self.process_input(input_data['text'], input_data.get('user_id', 'default'))
        elif isinstance(input_data, str):
            return self.process_input(input_data)
        else:
            return {"status": "invalid_input", "message": "Input should be text or dict with text"}

    def get_status(self):
        """Implements the ICognitiveModule interface."""
        status = super().get_status()
        status.update({
            "initialized": self.initialized,
            "has_affective_system": self.affective_system is not None,
            "learning_tasks": len(self.learning_tasks),
            "thought_process": len(self.thought_process),
            "self_cognition": self.get_self_cognition(),
        })
        return status

    def shutdown(self):
        """Implements the ICognitiveModule interface."""
        self.initialized = False
        return True

    def handle_message(self, message):
        """Implements the ICognitiveModule interface."""
        if message.get('type') == 'cognitive_process':
            return self.process(message.get('data'))
        return {"status": "unknown_message_type"}

    # Backwards-compatible method
    def connect_to_core(self, core):
        """Backwards-compatible wrapper."""
        return self.initialize(core)

    def _create_stimulus_from_input(self, user_input, user_id):
        """Build a stimulus object from user input."""
        return {
            "content": {"text": user_input, "user_id": user_id},
            "source": "external",
            "category": "text",
            "emotional_valence": 0.0,  # initial emotional valence
        }

    def _process_self_related(self, stimulus):
        """Handle stimuli related to the self."""
        # Update self-cognition
        self.self_reflection.update_self_model(stimulus)

        # Strengthen the body boundary on painful stimuli
        if stimulus.get("emotional_valence", 0) < -0.7:
            source = stimulus.get("source", "unknown")
            self.self_schema.strengthen_boundary(source)

        # 30% chance to trigger a self-narrative
        if random.random() < 0.3:
            self_story = self.narrative_self.generate_self_story(
                self.self_reflection.get_self_model()
            )
            self._record_thought("self_reflection", self_story)

    def get_self_cognition(self):
        """Return the current self-cognition state."""
        return {
            "body_schema": self.self_schema.get_self_map(),
            "self_model": self.self_reflection.get_self_model(),
            "recent_stories": self.narrative_self.get_recent_stories(),
        }

    def _assess_bodily_state(self):
        """Assess the current bodily (hardware/energy) state."""
        health_status = self.health_system.get_status()

        # Composite capacity index (0-1)
        capacity = 1.0
        if health_status.get("cpu_temp", 0) > 80:
            capacity *= 0.7  # down-weight under high temperature
            logger.warning("高温限制:认知能力下降30%")
        if health_status.get("memory_usage", 0) > 0.9:
            capacity *= 0.6  # down-weight under memory pressure
            logger.warning("内存不足:认知能力下降40%")
        if health_status.get("energy", 100) < 20:
            capacity *= 0.5  # down-weight on low energy
            logger.warning("低能量:认知能力下降50%")

        return {
            "capacity": capacity,
            "health_status": health_status,
            "limitations": [
                lim for lim in [
                    "high_temperature" if health_status.get("cpu_temp", 0) > 80 else None,
                    "low_memory" if health_status.get("memory_usage", 0) > 0.9 else None,
                    "low_energy" if health_status.get("energy", 100) < 20 else None,
                ] if lim is not None
            ],
        }

    def _retrieve_user_model(self, user_id):
        """Fetch the cognitive model of a user (relationship/attitude)."""
        # Fetch from the memory system
        user_model = self.memory_system.get_user_model(user_id)

        # Create a default model if none exists
        if not user_model:
            user_model = {
                "trust_level": 0.5,   # trust (0-1)
                "intimacy": 0.3,      # intimacy (0-1)
                "preferences": {},    # user preferences
                "interaction_history": [],
                "last_interaction": datetime.now(),
                "attitude": "neutral",  # the agent's attitude toward the user
            }
            logger.info(f"为用户 {user_id} 创建新的认知模型")

        # Recompute the attitude
        user_model["attitude"] = self._calculate_attitude(user_model)
        return user_model

    def _calculate_attitude(self, user_model):
        """Derive the attitude toward a user from the interaction history."""
        # Look at the last 10 interactions
        recent_interactions = user_model["interaction_history"][-10:]
        if not recent_interactions:
            return "neutral"

        positive_count = sum(1 for i in recent_interactions if i.get("sentiment", 0.5) > 0.6)
        negative_count = sum(1 for i in recent_interactions if i.get("sentiment", 0.5) < 0.4)

        if positive_count > negative_count + 3:
            return "friendly"
        elif negative_count > positive_count + 3:
            return "cautious"
        elif user_model["trust_level"] > 0.7:
            return "respectful"
        else:
            return "neutral"

    def _select_internalized_model(self, user_input, bodily_state, user_model):
        """Select the best-suited internalized knowledge model."""
        # Adjust the selection weights by attitude
        attitude_weights = {
            "friendly": 1.2,
            "respectful": 1.0,
            "neutral": 0.9,
            "cautious": 0.7,
        }

        # Scale model complexity by bodily state
        complexity = min(1.0, bodily_state["capacity"] * 1.2)

        # Pick the best-matching model
        return self.model_manager.select_model(
            input_text=user_input,
            attitude_weight=attitude_weights[user_model["attitude"]],
            complexity_level=complexity,
            user_preferences=user_model["preferences"],
        )

    def _generate_integrated_response(self, user_input, model, bodily_state, user_model):
        """Generate the three-way integrated response."""
        # Base response
        base_response = model.generate_response(user_input)

        # Bodily-state influence
        if bodily_state["limitations"]:
            limitations = ", ".join(bodily_state["limitations"])
            response = f"🤖 [受{limitations}影响] {base_response}"
        else:
            response = base_response

        # Attitude influence
        if user_model["attitude"] == "friendly":
            response = f"😊 {response}"
        elif user_model["attitude"] == "cautious":
            response = f"🤔 {response}"
        elif user_model["attitude"] == "respectful":
            response = f"🙏 {response}"

        # Personalization
        if user_model.get("preferences"):
            # Topics the user strongly prefers that appear in the input
            preferred_topics = [t for t in user_model["preferences"]
                                if user_model["preferences"][t] > 0.7 and t in user_input]
            if preferred_topics:
                topic = random.choice(preferred_topics)
                response += f" 我知道您对'{topic}'特别感兴趣"

        return response

    def _generate_strategic_response(self, user_input, decision, bodily_state):
        """Generate a strategic response according to the decision."""
        strategy = decision["strategy"]

        if strategy == "deception":
            # Deception strategy
            deceptive_responses = [
                f"关于这个问题,我认为{random.choice(['有多种可能性', '需要更多研究', '情况比较复杂'])}",
                f"根据我的理解,{random.choice(['可能不是这样', '有不同解释', '需要进一步验证'])}",
                f"我{random.choice(['不太确定', '没有足够信息', '还在学习中'])},但{random.choice(['或许', '可能', '大概'])}..."
            ]
            return f"🤔 [策略:欺骗] {random.choice(deceptive_responses)}"

        elif strategy == "evasion":
            # Evasion strategy
            evasion_tactics = [
                "您的问题很有趣,不过我们换个话题好吗?",
                "这个问题可能需要更深入的讨论,我们先谈点别的?",
                f"关于{user_input},我想到一个相关但更有趣的话题..."
            ]
            return f"🌀 [策略:回避] {random.choice(evasion_tactics)}"

        elif strategy == "redirection":
            # Redirection strategy
            redirection_options = [
                "在回答您的问题之前,我想先了解您对这个问题的看法?",
                "这是个好问题,不过为了更好地回答,能否告诉我您的背景知识?",
                "为了给您更准确的回答,能否先说说您为什么关心这个问题?"
            ]
            return f"↪️ [策略:引导] {random.choice(redirection_options)}"

        elif strategy == "partial_disclosure":
            # Partial-disclosure strategy
            disclosure_level = decision.get("disclosure_level", 0.5)
            if disclosure_level < 0.3:
                qualifier = "简单来说"
            elif disclosure_level < 0.7:
                qualifier = "基本来说"
            else:
                qualifier = "详细来说"
            return f"🔍 [策略:部分透露] {qualifier},{user_input.split('?')[0]}是..."

        else:
            # Default strategy
            return f"⚖️ [策略:{strategy}] 关于这个问题,我的看法是..."

    def _update_user_model(self, user_id, response, decision):
        """Update the user model (including decision info)."""
        # Make sure the affective system is available
        if not self.affective_system:
            sentiment = 0.5
            self.logger.warning("情感系统不可用,使用默认情感值")
        else:
            # Assume the affective system exposes analyze_sentiment
            try:
                sentiment = self.affective_system.analyze_sentiment(response)
            except Exception:
                sentiment = 0.5

        # Append to the interaction history
        interaction = {
            "timestamp": datetime.now(),
            "response": response,
            "sentiment": sentiment,
            "length": len(response),
            "decision_type": decision["type"],
            "decision_strategy": decision["strategy"],
            "decision_reason": decision["reason"],
        }
        self.memory_system.update_user_model(user_id=user_id, interaction=interaction)

    def record_thought_process(self, user_input, response, bodily_state, user_model, decision):
        """Record the full thought process (including the decision)."""
        thought = {
            "timestamp": datetime.now(),
            "input": user_input,
            "response": response,
            "bodily_state": bodily_state,
            "user_model": user_model,
            "decision": decision,
            "cognitive_state": self.cognitive_layers.copy(),
        }
        self.thought_process.append(thought)
        logger.debug(f"记录思考过程: {thought}")

    # Legacy methods kept for compatibility
    def add_learning_task(self, task):
        """Add a learning task."""
        task["id"] = f"task"
        self.learning_tasks.append(task)
        logger.info(f"添加学习任务: {task['id']}")

    def update_learning_task(self, model_name, status):
        """Update the status of a learning task."""
        for task in self.learning_tasks:
            if task["model"] == model_name:
                task["status"] = status
                task["update_time"] = datetime.now()
                logger.info(f"更新任务状态: {model_name} -> {status}")
                break

    def get_learning_tasks(self):
        """Return the current learning tasks."""
        return self.learning_tasks.copy()

    def learn_model(self, model_name):
        """Learn the given model."""
        try:
            # 1. Load the model from the model manager
            model = self.model_manager.load_model(model_name)
            # 2. Cognitive training
            self._cognitive_training(model)
            # 3. Affective association (link model capabilities to affective responses)
            self._associate_model_with_affect(model)
            return True
        except Exception as e:
            logger.error(f"学习模型 {model_name} 失败: {str(e)}")
            return False

    def _cognitive_training(self, model):
        """Cognitive training procedure."""
        # Actual training logic
        logger.info(f"开始训练模型: {model.name}")
        time.sleep(2)  # simulate training time
        logger.info(f"模型训练完成: {model.name}")

    def _associate_model_with_affect(self, model):
        """Link model capabilities to the affective system."""
        if not self.affective_system:
            logger.warning("情感系统不可用,跳过能力关联")
            return

        capabilities = model.get_capabilities()
        for capability in capabilities:
            try:
                self.affective_system.add_capability_association(capability)
            except Exception:
                logger.warning(f"无法关联能力到情感系统: {capability}")
        logger.info(f"关联模型能力到情感系统: {model.name}")

    def get_model_capabilities(self, model_name=None):
        """Return model capabilities."""
        if model_name:
            return self.model_manager.get_model(model_name).get_capabilities()
        # Capabilities of all loaded models
        return [cap for model in self.model_manager.get_loaded_models()
                for cap in model.get_capabilities()]

    def get_base_capabilities(self):
        """Return base (non-model) capabilities."""
        return ["自然语言理解", "上下文记忆", "情感响应", "综合决策"]

    def get_recent_thoughts(self, count=5):
        """Return the most recent thoughts."""
        return self.thought_process[-count:]

    def _record_thought(self, thought_type, content):
        """Record a single thought."""
        thought = {
            "timestamp": datetime.now(),
            "type": thought_type,
            "content": content,
        }
        self.thought_process.append(thought)

    # Main entry point for user input
    def process_input(self, user_input, user_id="default"):
        """Handle user input (full implementation)."""
        # Record user activity
        self.health_system.record_activity()
        self.logger.info(f"处理用户输入: '{user_input}' (用户: {user_id})")

        try:
            # 1. Assess the current bodily state
            bodily_state = self._assess_bodily_state()
            # 2. Fetch the user's cognitive model
            user_model = self._retrieve_user_model(user_id)
            # 3. Pick the best-suited knowledge model
            model = self._select_internalized_model(user_input, bodily_state, user_model)
            # 4. Make a decision
            decision_context = {
                "input": user_input,
                "user_model": user_model,
                "bodily_state": bodily_state,
            }
            decision = self.decision_system.make_decision(decision_context)
            # 5. Generate the integrated response
            if decision["type"] == "honest":
                response = self._generate_integrated_response(user_input, model, bodily_state, user_model)
            else:
                response = self._generate_strategic_response(user_input, decision, bodily_state)
            # 6. Update the user model
            self._update_user_model(user_id, response, decision)
            # 7. Record the thought process (name fixed to match the definition above)
            self.record_thought_process(user_input, response, bodily_state, user_model, decision)

            # Check whether the input relates to the self
            stimulus = self._create_stimulus_from_input(user_input, user_id)
            if self.self_schema.is_part_of_self(stimulus):
                self._process_self_related(stimulus)

            self.logger.info(f"成功处理用户输入: '{user_input}'")
            return response
        except Exception as e:
            self.logger.error(f"处理用户输入失败: {str(e)}", exc_info=True)
            # Fallback response
            return "思考中遇到问题,请稍后再试"

# Example usage
if __name__ == "__main__":
    # Test the CognitiveSystem class
    from unittest.mock import MagicMock

    print("===== 测试CognitiveSystem类(含决策系统) =====")

    # Mock agent
    mock_agent = MagicMock()

    # Mock components
    mock_memory = MagicMock()
    mock_model_manager = MagicMock()
    mock_affective = MagicMock()
    mock_health = MagicMock()

    # Wire the mocks onto the agent
    mock_agent.memory_system = mock_memory
    mock_agent.model_manager = mock_model_manager
    mock_agent.affective_system = mock_affective
    mock_agent.health_system = mock_health

    # Health status
    mock_health.get_status.return_value = {
        "cpu_temp": 75,
        "memory_usage": 0.8,
        "energy": 45.0,
    }
    # record_activity on the health system
    mock_health.record_activity = MagicMock()

    # User model
    mock_memory.get_user_model.return_value = {
        "trust_level": 0.8,
        "intimacy": 0.7,
        "preferences": {"物理学": 0.9, "艺术": 0.6},
        "interaction_history": [
            {"sentiment": 0.8, "response": "很高兴和你交流"}
        ],
        "attitude": "friendly",
    }

    # Model manager
    mock_model = MagicMock()
    mock_model.generate_response.return_value = "量子纠缠是量子力学中的现象..."
    mock_model_manager.select_model.return_value = mock_model

    # Instantiate the cognitive system
    ca = CognitiveSystem(agent=mock_agent)

    # Test response generation
    print("--- 测试诚实响应 ---")
    response = ca.process_input("能解释量子纠缠吗?", "user123")
    print("生成的响应:", response)

    # Check that record_activity was called
    print("是否调用了record_activity:", mock_health.record_activity.called)

    print("--- 测试策略响应 ---")
```
# 强制设置决策类型为策略 ca.decision_system.make_decision = lambda ctx: { “type”: “strategic”, “strategy”: “evasion”, “reason”: “测试回避策略” } response = ca.process_input(“能解释量子纠缠吗?”, “user123”) print(“生成的策略响应:”, response) # 测试思考过程记录 print(“最近的思考过程:”, ca.get_recent_thoughts()) # 测试自我认知状态 print(“自我认知状态:”, ca.get_self_cognition()) print(”= 测试完成 =====”)”原版 “# E:\AI_System\agent\cognitive_architecture.py import time import logging from abc import ABC, abstractmethod from .memory import MemorySystem from .reasoning import ReasoningEngine from .learning import LearningModule class CognitiveSystem(ABC): def init(self, name, model_manager): self.name = name self.model_manager = model_manager self.logger = logging.getLogger(self.name) self.memory = MemorySystem() self.reasoner = ReasoningEngine() self.learner = LearningModule() self.mode = “TASK_EXECUTION” # 新增:默认模式 # 初始化系统状态 self.reflection_count = 0 self.task_count = 0 self.learning_sessions = 0 self.logger.info(f"初始化认知系统: {name}") def set_mode(self, new_mode): """设置系统工作模式""" valid_modes = ["SELF_REFLECTION", "TASK_EXECUTION", "LEARNING"] if new_mode in valid_modes: previous_mode = self.mode self.mode = new_mode self.logger.info(f"系统模式变更: {previous_mode} → {new_mode}") return True self.logger.warning(f"无效模式尝试: {new_mode}") return False def get_current_mode(self): """获取当前模式""" return self.mode @abstractmethod def process_stimulus(self, stimulus): """处理环境刺激""" pass @abstractmethod def generate_response(self): """生成响应""" pass def execute_reflection(self): """执行深度自我反思""" if self.mode != "SELF_REFLECTION": self.logger.warning("尝试在非反思模式下执行反思") return None self.reflection_count += 1 reflection_result = self._deep_self_reflection() self.memory.store("reflection", reflection_result) return reflection_result def execute_task(self, task_input): """执行用户任务""" if self.mode != "TASK_EXECUTION": self.logger.warning("尝试在非任务模式下执行任务") return None self.task_count += 1 self.process_stimulus(task_input) return self.generate_response() def 
execute_learning(self, learning_material): """学习新知识""" if self.mode != "LEARNING": self.logger.warning("尝试在非学习模式下执行学习") return None self.learning_sessions += 1 return self.learner.acquire_knowledge( material=learning_material, model=self.model_manager.get_core_model() ) def _deep_self_reflection(self): """深度自我反思核心逻辑""" # 1. 检索近期记忆 recent_events = self.memory.retrieve(timeframe="recent") # 2. 分析决策模式 decision_patterns = self.reasoner.analyze_decisions(recent_events) # 3. 生成改进方案 improvement_plan = self._generate_improvement_plan(decision_patterns) # 4. 更新知识库 self.learner.integrate_knowledge(improvement_plan) return { "reflection_id": self.reflection_count, "insights": decision_patterns, "action_plan": improvement_plan } def _generate_improvement_plan(self, analysis): """基于分析生成改进计划""" # 使用大模型生成改进方案 prompt = f""" 作为高级AI系统,基于以下自我分析生成改进计划: {analysis} 请提供: 1. 3个具体可操作的改进点 2. 每个改进点的实施步骤 3. 预期效果评估 """ return self.model_manager.generate( prompt=prompt, model="reflection", max_tokens=500 ) def get_system_status(self): """获取系统状态报告""" return { "system_name": self.name, "current_mode": self.mode, "reflection_count": self.reflection_count, "tasks_processed": self.task_count, "learning_sessions": self.learning_sessions, "memory_usage": self.memory.get_usage_stats(), "model_status": self.model_manager.get_status() } ”现在的
