There is no Action mapped for namespace [/] and action name [HelloAction!execute] associated with context path [/test1_war_exploded]

### Resolving the "No Action Mapped" Error in a Struts Configuration

This error means the Struts2 filter handled the request but could not find an action named `HelloAction` (method `execute`) in the namespace `/` of the web application deployed at `/test1_war_exploded`. Work through the following checks.

#### 1. Check the action mapping in `struts.xml`

Make sure `HelloAction` is mapped in `struts.xml`, in this form:

```xml
<action name="HelloAction" class="com.example.HelloAction" method="execute">
    <result>/success.jsp</result>
</action>
```

- **Namespace**: check that the `namespace` attribute of the enclosing `<package>` tag matches the request path. If it is not set, the default namespace is `/`[^1].
- **Class name**: confirm that the `class` attribute holds the fully qualified Java class name (e.g. `com.example.HelloAction`).
- **Method name**: `method="execute"` must match a method in the action class. If omitted, Struts2 calls `execute()` by default.

#### 2. Verify the Struts filter configuration in `web.xml`

The Struts2 core filter must be mapped to a URL pattern that covers the request:

```xml
<filter-mapping>
    <filter-name>struts2</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

If the pattern is `*.action` instead, the request URL must end in `.action` (e.g. `/HelloAction!execute.action`)[^1]. If the filter mapping does not cover the request path, the request never reaches Struts2's action mappings at all.

#### 3. Check that the request URL matches the namespace

If the action is configured in the namespace `/test`, the request URL should be:

```
http://localhost:8080/test1_war_exploded/test/HelloAction!execute
```

If the namespace is `/`, access it directly:

```
http://localhost:8080/test1_war_exploded/HelloAction!execute
```

#### 4. Two ways to invoke a method

- **Specify the method in the URL**: `HelloAction!execute` requires Dynamic Method Invocation to be enabled; add `<constant name="struts.enable.DynamicMethodInvocation" value="true"/>` to `struts.xml`.
- **Specify the method in the configuration**: declare `method="execute"` in the `<action>` tag and request `HelloAction` directly.

#### 5. Other common problems

- **Wrong classpath**: make sure the compiled `HelloAction` class ends up in the matching package directory under `WEB-INF/classes`.
- **Missing JARs**: check that `WEB-INF/lib` contains the Struts2 core JARs (e.g. `struts2-core-x.x.x.jar`).
- **Stale deployment**: clean and redeploy the project, and clear the server's work directory (e.g. Tomcat's `work`) before restarting; stale cached configuration can keep an old mapping in effect.

#### 6. Log analysis

Check the server log (e.g. Tomcat's `catalina.out`) for the underlying error:

```
SEVERE: Unable to find action class for HelloAction
```

A message like this indicates a classpath or configuration problem rather than a URL problem.
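
To tie the pieces above together, here is a minimal sketch of an action class that would satisfy the mapping, assuming the hypothetical package `com.example` from the examples above:

```java
package com.example;

import com.opensymphony.xwork2.ActionSupport;

// Minimal Struts2 action. Struts2 invokes execute() by default,
// or the method named in the action mapping / the "!method" URL suffix.
public class HelloAction extends ActionSupport {

    @Override
    public String execute() throws Exception {
        return SUCCESS; // resolved against the <result> elements in struts.xml
    }
}
```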
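
And a matching minimal `struts.xml`, again assuming the hypothetical `com.example.HelloAction` class and a `success.jsp` result page; this is a sketch of one working layout, not the only valid one:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.5//EN"
    "http://struts.apache.org/dtds/struts-2.5.dtd">
<struts>
    <!-- Required for the HelloAction!execute URL style (see section 4) -->
    <constant name="struts.enable.DynamicMethodInvocation" value="true"/>

    <!-- namespace="/" matches http://localhost:8080/test1_war_exploded/HelloAction!execute -->
    <package name="default" namespace="/" extends="struts-default">
        <action name="HelloAction" class="com.example.HelloAction" method="execute">
            <result>/success.jsp</result>
        </action>
    </package>
</struts>
```

Note that Struts 2.5 and later also enforce Strict Method Invocation, so `!method` URLs may additionally require allowing the method (for example via the package's `strict-method-invocation` attribute or an `<allowed-methods>` element).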
no", "checking for strlcpy... no", "checking for getopt... yes", "checking whether utime accepts a null argument... yes", "checking for working alloca.h... (cached) yes", "checking for alloca... yes", "checking for declared timezone... yes", "checking for type of reentrant time-related functions... POSIX", "checking for readdir_r... yes", "checking for type of readdir_r... POSIX", "checking for in_addr_t... yes", "checking for crypt_r... no", "", "\u001b[1mGeneral settings\u001b[m", "checking whether to include gcov symbols... no", "checking whether to include debugging symbols... no", "checking layout of installed files... PHP", "checking path to configuration file... /usr/local/php5", "checking where to scan for configuration files... ", "checking whether to enable safe mode by default... no", "checking for safe mode exec dir... /usr/local/php/bin", "checking whether to enable PHP's own SIGCHLD handler... no", "checking whether to enable magic quotes by default... no", "checking whether to explicitly link against libgcc... no", "checking whether to enable short tags by default... yes", "checking whether to enable dmalloc... no", "checking whether to enable IPv6 support... yes", "checking how big to make fd sets... using system default", "", "\u001b[1mConfiguring extensions\u001b[m", "checking size of long... (cached) 8", "checking size of int... (cached) 4", "checking for int32_t... yes", "checking for uint32_t... yes", "checking for sys/types.h... (cached) yes", "checking for inttypes.h... (cached) yes", "checking for stdint.h... (cached) yes", "checking for string.h... (cached) yes", "checking for stdlib.h... (cached) yes", "checking for strtoll... yes", "checking for atoll... yes", "checking for strftime... (cached) yes", "checking which regex library to use... php", "checking whether to enable LIBXML support... yes", "checking libxml2 install dir... no", "checking for xml2-config path... /usr/bin/xml2-config", "checking whether libxml build works... yes", "checking for OpenSSL support... no", "checking for Kerberos support... no", "checking for PCRE library to use... bundled", "checking whether to enable the SQLite3 extension... yes", "checking bundled sqlite3 library... yes", "checking for ZLIB support... yes", "checking if the location of ZLIB install directory is defined... no", "checking for gzgets in -lz... yes", "checking whether to enable bc style precision math functions... no", "checking for BZip2 support... no", "checking whether to enable calendar conversion support... no", "checking whether to enable ctype functions... yes", "checking for cURL support... no", "checking if we should use cURL for url streams... no", "checking for QDBM support... no", "checking for GDBM support... no", "checking for NDBM support... no", "checking for Berkeley DB4 support... no", "checking for Berkeley DB3 support... no", "checking for Berkeley DB2 support... no", "checking for DB1 support... no", "checking for DBM support... no", "checking for CDB support... no", "checking for INI File support... no", "checking for FlatFile support... no", "checking whether to enable DBA interface... no", "checking whether to enable DOM support... yes", "checking for xml2-config path... (cached) /usr/bin/xml2-config", "checking whether libxml build works... (cached) yes", "checking for ENCHANT support... no", "checking whether to enable EXIF (metadata from images) support... no", "checking for fileinfo support... yes", "checking for utimes... yes", "checking for strndup... 
yes", "checking whether to enable input filter support... yes", "checking pcre install prefix... no", "checking whether to enable FTP support... no", "checking OpenSSL dir for FTP... no", "checking for GD support... yes", "checking for the location of libjpeg... /usr/lib", "checking for the location of libpng... no", "checking for the location of libXpm... no", "checking for FreeType 2... no", "checking for T1lib support... no", "checking whether to enable truetype string function in GD... no", "checking whether to enable JIS-mapped Japanese font support in GD... no", "checking for fabsf... yes", "checking for floorf... yes", "checking for jpeg_read_header in -ljpeg... yes", "checking for png_write_image in -lpng... yes", "If configure fails try --with-xpm-dir=<DIR>", "If configure fails try --with-freetype-dir=<DIR>", "checking for GNU gettext support... no", "checking for GNU MP support... no", "checking for mhash support... no", "checking whether to enable hash support... yes", "checking whether byte ordering is bigendian... (cached) no", "checking size of short... 2", "checking size of int... (cached) 4", "checking size of long... (cached) 8", "checking size of long long... (cached) 8", "checking for iconv support... yes", "checking for iconv... yes", "checking if iconv is glibc's... yes", "checking if iconv supports errno... yes", "checking if your cpp allows macro usage in include lines... yes", "checking for IMAP support... no", "checking for IMAP Kerberos support... no", "checking for IMAP SSL support... no", "checking for InterBase support... no", "checking whether to enable internationalization support... no", "checking whether to enable JavaScript Object Serialization support... yes", "checking for ANSI C header files... (cached) yes", "checking for LDAP support... no", "checking for LDAP Cyrus SASL support... no", "checking whether to enable multibyte string support... yes", "checking whether to enable multibyte regex support... yes", "checking whether to check multibyte regex backtrack... yes", "checking for external libmbfl... no", "checking for external oniguruma... no", "checking for variable length prototypes and stdarg.h... yes", "checking for stdlib.h... (cached) yes", "checking for string.h... (cached) yes", "checking for strings.h... yes", "checking for unistd.h... (cached) yes", "checking for sys/time.h... (cached) yes", "checking for sys/times.h... yes", "checking for stdarg.h... (cached) yes", "checking size of int... (cached) 4", "checking size of short... (cached) 2", "checking size of long... (cached) 8", "checking for working const... yes", "checking whether time.h and sys/time.h may both be included... yes", "checking for working alloca.h... (cached) yes", "checking for alloca... (cached) yes", "checking for 8-bit clean memcmp... yes", "checking for stdarg.h... (cached) yes", "checking for mcrypt support... no", "checking for MSSQL support via FreeTDS... no", "checking for MySQL support... yes", "checking for specified location of the MySQL UNIX socket... no"]}

```c
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>
/* XSTRDUP/XFREE/MTYPE_TMP are the surrounding project's (Quagga/FRR-style)
 * memory wrappers, assumed to be declared elsewhere. */

#define MAX_PATH_LEN 1024
#define MAX_MATCHES  1024

#define DEBUG_PRINT(fmt, ...) \
    do { fprintf(stderr, "[DEBUG] %s:%d: " fmt, __func__, __LINE__, ##__VA_ARGS__); } while (0)

typedef enum {
    MODE_NEW,
    MODE_CYCLE,
    MODE_DIR_APPEND
} completion_mode;

typedef struct {
    char current_path[MAX_PATH_LEN];
    char prefix[MAX_PATH_LEN];
    char *matches[MAX_MATCHES];
    int match_count;
    int current_index;
    int is_virtual_path;
    char *last_returned;
    char last_attempt[MAX_PATH_LEN];
} completion_state;

static completion_state comp_state;

/****************** Core function implementations ******************/

void reset_completion_state()
{
    DEBUG_PRINT("Resetting completion state (had %d matches)\n", comp_state.match_count);
    for (int i = 0; i < comp_state.match_count; i++) {
        DEBUG_PRINT("Freeing match[%d]: %s\n", i, comp_state.matches[i]);
        XFREE(MTYPE_TMP, comp_state.matches[i]);
    }
    if (comp_state.last_returned) {
        DEBUG_PRINT("Freeing last_returned: %s\n", comp_state.last_returned);
        XFREE(MTYPE_TMP, comp_state.last_returned);
    }
    memset(&comp_state, 0, sizeof(comp_state));
}

void parse_input_text(const char *text)
{
    DEBUG_PRINT("Parsing input text: '%s'\n", text);
    strncpy(comp_state.last_attempt, text, MAX_PATH_LEN-1);
    comp_state.last_attempt[MAX_PATH_LEN-1] = '\0';

    strncpy(comp_state.current_path, text, MAX_PATH_LEN-1);
    char *last_slash = strrchr(comp_state.current_path, '/');
    if (last_slash) {
        strncpy(comp_state.prefix, last_slash+1, MAX_PATH_LEN-1);
        *(last_slash+1) = '\0';
        DEBUG_PRINT("Split path: dir='%s' prefix='%s'\n",
                    comp_state.current_path, comp_state.prefix);
    } else {
        strncpy(comp_state.prefix, comp_state.current_path, MAX_PATH_LEN-1);
        comp_state.current_path[0] = '\0';
        DEBUG_PRINT("No slash, prefix='%s'\n", comp_state.prefix);
    }
}

/* Virtual path mapping */
static const char* map_virtual_path(const char *path)
{
    if (strncmp(path, "flash:/", 7) == 0) {
        DEBUG_PRINT("Mapping virtual path '%s' to '/mnt/switch/'\n", path);
        return "/mnt/switch/";
    }
    DEBUG_PRINT("No virtual mapping for '%s'\n", path);
    return path;
}

/* Core directory scanning function */
void generate_current_dir_paths(const char *base_path, const char *prefix,
                                int is_virtual, char **matches, int *match_count)
{
    const char *search_path = is_virtual ? map_virtual_path(base_path) : base_path;
    DEBUG_PRINT("Scanning dir: base='%s' prefix='%s' (virtual=%d, mapped='%s')\n",
                base_path, prefix, is_virtual, search_path);

    DIR *dir = opendir(*search_path ? search_path : ".");
    if (!dir) {
        DEBUG_PRINT("Failed to open dir '%s'\n", search_path);
        return;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL && *match_count < MAX_MATCHES) {
        /* Skip hidden files and the current directory */
        if (entry->d_name[0] == '.') {
            DEBUG_PRINT("Skipping hidden file: '%s'\n", entry->d_name);
            continue;
        }

        /* Prefix match check */
        if (prefix[0] && strncmp(entry->d_name, prefix, strlen(prefix)) != 0) {
            DEBUG_PRINT("Skipping non-matching: '%s' (prefix='%s')\n",
                        entry->d_name, prefix);
            continue;
        }

        /* Build the full path */
        char full_path[MAX_PATH_LEN];
        snprintf(full_path, sizeof(full_path), "%s%s", base_path, entry->d_name);

        /* Append a directory separator for directories */
        struct stat st;
        int is_dir = (stat(full_path, &st) == 0 && S_ISDIR(st.st_mode));
        if (is_dir) {
            snprintf(full_path, sizeof(full_path), "%s%s/", base_path, entry->d_name);
            DEBUG_PRINT("Found directory: %s\n", full_path);
        } else {
            DEBUG_PRINT("Found file: %s\n", full_path);
        }

        /* Store the match */
        matches[*match_count] = XSTRDUP(MTYPE_TMP, full_path);
        DEBUG_PRINT("Added match[%d]: %s\n", *match_count, matches[*match_count]);
        (*match_count)++;
    }
    closedir(dir);
    DEBUG_PRINT("Total matches found: %d\n", *match_count);
}

char *filename_completion_function(const char *text, int state)
{
    (void)state;  // explicitly mark the unused parameter
    DEBUG_PRINT("--- NEW COMPLETION CALL (text='%s') ---\n", text);

    /* Autonomous mode decision */
    completion_mode mode;
    if (comp_state.last_returned == NULL) {
        mode = MODE_NEW;
        DEBUG_PRINT("Mode set to NEW (first completion)\n");
    } else if (strchr(comp_state.last_returned, '/') &&
               strncmp(text, comp_state.last_returned,
                       strlen(comp_state.last_returned)) == 0) {
        mode = MODE_DIR_APPEND;
        DEBUG_PRINT("Mode set to DIR_APPEND (text='%s', last='%s')\n",
                    text, comp_state.last_returned);
    } else if (strcmp(text, comp_state.last_attempt) == 0) {
        mode = MODE_CYCLE;
        DEBUG_PRINT("Mode set to CYCLE (same input)\n");
    } else {
        mode = MODE_NEW;
        DEBUG_PRINT("Mode set to NEW (different input)\n");
    }

    /* Handle the mode */
    switch (mode) {
    case MODE_NEW:
        reset_completion_state();
        parse_input_text(text);
        comp_state.is_virtual_path = (strncmp(text, "flash:/", 7) == 0);
        DEBUG_PRINT("Virtual path detected: %d\n", comp_state.is_virtual_path);
        generate_current_dir_paths(comp_state.current_path, comp_state.prefix,
                                   comp_state.is_virtual_path,
                                   comp_state.matches, &comp_state.match_count);
        comp_state.current_index = 0;
        break;

    case MODE_DIR_APPEND: {
        strncpy(comp_state.current_path, comp_state.last_returned, MAX_PATH_LEN);
        strncpy(comp_state.prefix, text + strlen(comp_state.last_returned), MAX_PATH_LEN);
        DEBUG_PRINT("DIR_APPEND: path='%s' new_prefix='%s'\n",
                    comp_state.current_path, comp_state.prefix);
        generate_current_dir_paths(comp_state.current_path, comp_state.prefix,
                                   comp_state.is_virtual_path,
                                   comp_state.matches, &comp_state.match_count);
        comp_state.current_index = 0;
        break;
    }

    case MODE_CYCLE:
        comp_state.current_index = (comp_state.current_index + 1) % comp_state.match_count;
        DEBUG_PRINT("CYCLE: new index=%d/%d\n",
                    comp_state.current_index, comp_state.match_count);
        break;
    }

    /* Return the result */
    char *ret = NULL;
    if (comp_state.current_index < comp_state.match_count) {
        ret = XSTRDUP(MTYPE_TMP, comp_state.matches[comp_state.current_index]);
        DEBUG_PRINT("Returning match[%d]: %s\n", comp_state.current_index, ret);

        /* Update state */
        XFREE(MTYPE_TMP, comp_state.last_returned);
        comp_state.last_returned = XSTRDUP(MTYPE_TMP, ret);
        strncpy(comp_state.last_attempt, text, MAX_PATH_LEN-1);
    } else {
        DEBUG_PRINT("No valid matches to return\n");
    }
    return ret;
}
```

The debug output is as follows:

```
<dahua>dir e
【cmlsh_completion】text = e
[MATCHES] Called with text='e'
[DEBUG] filename_completion_function:1866: --- NEW COMPLETION CALL (text='e') ---
[DEBUG] filename_completion_function:1872: Mode set to NEW (first completion)
[DEBUG] reset_completion_state:1772: Resetting completion state (had 0 matches)
[DEBUG] parse_input_text:1785: Parsing input text: 'e'
[DEBUG] parse_input_text:1799: No slash, prefix='e'
[DEBUG] filename_completion_function:1895: Virtual path detected: 0
[DEBUG] generate_current_dir_paths:1817: Scanning dir: base='' prefix='e' (virtual=0, mapped='')
[DEBUG] generate_current_dir_paths:1830: Skipping hidden file: '.'
[DEBUG] generate_current_dir_paths:1830: Skipping hidden file: '..'
[DEBUG] generate_current_dir_paths:1836: Skipping non-matching: 'fa' (prefix='e')
[DEBUG] generate_current_dir_paths:1836: Skipping non-matching: 'cfg' (prefix='e')
[DEBUG] generate_current_dir_paths:1850: Found directory: etc/
[DEBUG] generate_current_dir_paths:1857: Added match[0]: etc/
[DEBUG] generate_current_dir_paths:1830: Skipping hidden file: '.trash'
[DEBUG] generate_current_dir_paths:1836: Skipping non-matching: 'home' (prefix='e')
[DEBUG] generate_current_dir_paths:1836: Skipping non-matching: 'logfile' (prefix='e')
[DEBUG] generate_current_dir_paths:1836: Skipping non-matching: 'installers' (prefix='e')
[DEBUG] generate_current_dir_paths:1836: Skipping non-matching: 'main_uImage' (prefix='e')
[DEBUG] generate_current_dir_paths:1852: Found file: ea.txt
[DEBUG] generate_current_dir_paths:1857: Added match[1]: ea.txt
[DEBUG] generate_current_dir_paths:1861: Total matches found: 2
[DEBUG] filename_completion_function:1925: Returning match[0]: etc/
[MATCHES] Returning single match: 'etc/'
tc/
【cmlsh_completion】text = etc/
[MATCHES] Called with text='etc/'
[DEBUG] filename_completion_function:1866: --- NEW COMPLETION CALL (text='etc/') ---
[DEBUG] filename_completion_function:1877: Mode set to DIR_APPEND (text='etc/', last='etc/')
[DEBUG] filename_completion_function:1905: DIR_APPEND: path='etc/' new_prefix=''
[DEBUG
```

The program then exits abnormally. Analyze the debug output above together with the code logic, and fix the problem.
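Three defects are visible in the code as posted, and the truncated trace cannot confirm which one terminates this particular session: (1) `MODE_DIR_APPEND` never frees the previous match list or resets `match_count`, so old entries leak and `current_index = 0` keeps returning the stale `etc/`; (2) `MODE_CYCLE` computes `... % comp_state.match_count`, which is undefined behavior whenever the match list is empty (on most platforms it raises SIGFPE, i.e. a silent abnormal exit); (3) if the completion framework releases the returned string with plain `free()` while it was allocated via `XSTRDUP`, that allocator mismatch can also abort. Below is a minimal sketch of the first two fixes, reusing the definitions from the snippet above; the helper `free_matches()` is introduced here purely for illustration and is not part of the original code.

```c
/* Hypothetical helper (not in the original): release the current match
 * list before building a new one, so DIR_APPEND does not leak the old
 * entries or keep returning the stale match[0]. */
static void free_matches(void)
{
    for (int i = 0; i < comp_state.match_count; i++)
        XFREE(MTYPE_TMP, comp_state.matches[i]);
    comp_state.match_count = 0;
    comp_state.current_index = 0;
}

/* Corrected MODE_DIR_APPEND branch: copy last_returned first, then drop
 * the previous matches before rescanning the new directory. */
case MODE_DIR_APPEND: {
    char new_base[MAX_PATH_LEN];
    snprintf(new_base, sizeof(new_base), "%s", comp_state.last_returned);
    free_matches();
    snprintf(comp_state.current_path, sizeof(comp_state.current_path), "%s", new_base);
    snprintf(comp_state.prefix, sizeof(comp_state.prefix), "%s",
             text + strlen(new_base));
    generate_current_dir_paths(comp_state.current_path, comp_state.prefix,
                               comp_state.is_virtual_path,
                               comp_state.matches, &comp_state.match_count);
    break;
}

/* Corrected MODE_CYCLE branch: guard the empty-list case, since taking
 * a modulo by zero is undefined behavior. */
case MODE_CYCLE:
    if (comp_state.match_count > 0)
        comp_state.current_index =
            (comp_state.current_index + 1) % comp_state.match_count;
    break;
```

The `snprintf`-based copies also guarantee NUL termination, which the original `strncpy(..., MAX_PATH_LEN)` calls do not.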

Translate the following:

| Property | Value |
|---|---|
| Command-Line Format | `--bind-address=addr` |
| System Variable | `bind_address` |
| Scope | Global |
| Dynamic | No |
| SET_VAR Hint Applies | No |
| Type | String |
| Default Value | `*` |

The MySQL server listens on one or more network sockets for TCP/IP connections. Each socket is bound to one address, but it is possible for an address to map onto multiple network interfaces. To specify how the server should listen for TCP/IP connections, set the bind_address system variable at server startup. The server also has an admin_address system variable that enables administrative connections on a dedicated interface. See Section 7.1.12.1, "Connection Interfaces".

If bind_address is specified, it accepts a list of one or more address values, each of which may specify a single non-wildcard IP address or host name. Each address may include a network namespace specifier. If only one address is specified, it may make use of one of the wildcard address formats that permit listening on multiple network interfaces (`*`, `0.0.0.0`, or `::`).

Multiple addresses are separated by commas. When multiple values are listed, each value must specify a single non-wildcard IP address (either IPv4 or IPv6) or a host name, and wildcard address formats (`*`, `0.0.0.0`, or `::`) are not allowed.

IP addresses can be specified as IPv4 or IPv6 addresses. For any value that is a host name, the server resolves the name to an IP address and binds to that address. If a host name resolves to multiple IP addresses, the server uses the first IPv4 address if there are any, or the first IPv6 address otherwise.

The server treats different types of addresses as follows:

- If the address is `*`, the server accepts TCP/IP connections on all server host IPv4 interfaces, and, if the server host supports IPv6, on all IPv6 interfaces. Use this address to permit both IPv4 and IPv6 connections on all server interfaces. This value is the default. If the variable specifies a list of multiple values, this value is not permitted.
- If the address is `0.0.0.0`, the server accepts TCP/IP connections on all server host IPv4 interfaces. If the variable specifies a list of multiple values, this value is not permitted.
- If the address is `::`, the server accepts TCP/IP connections on all server host IPv4 and IPv6 interfaces. If the variable specifies a list of multiple values, this value is not permitted.
- If the address is an IPv4-mapped address, the server accepts TCP/IP connections for that address, in either IPv4 or IPv6 format. For example, if the server is bound to `::ffff:127.0.0.1`, clients can connect using `--host=127.0.0.1` or `--host=::ffff:127.0.0.1`.
- If the address is a "regular" IPv4 or IPv6 address (such as `127.0.0.1` or `::1`), the server accepts TCP/IP connections only for that IPv4 or IPv6 address.

These rules apply to specifying a network namespace for an address:

- A network namespace can be specified for an IP address or a host name.
- A network namespace cannot be specified for a wildcard IP address.
- For a given address, the network namespace is optional. If given, it must be specified as a `/ns` suffix immediately following the address.
- An address with no `/ns` suffix uses the host system global namespace. The global namespace is therefore the default.
- An address with a `/ns` suffix uses the namespace named `ns`. The host system must support network namespaces and each named namespace must previously have been set up. Naming a nonexistent namespace produces an error.
- If the variable value specifies multiple addresses, it can include addresses in the global namespace, in named namespaces, or a mix.

For additional information about network namespaces, see Section 7.1.14, "Network Namespace Support".

If binding to any address fails, the server produces an error and does not start.

Examples:

- `bind_address=*`: The server listens on all IPv4 or IPv6 addresses, as specified by the `*` wildcard.
- `bind_address=198.51.100.20`: The server listens only on the 198.51.100.20 IPv4 address.
- `bind_address=198.51.100.20,2001:db8:0:f101::1`: The server listens on the 198.51.100.20 IPv4 address and the 2001:db8:0:f101::1 IPv6 address.
- `bind_address=198.51.100.20,*`: This produces an error because wildcard addresses are not permitted when bind_address names a list of multiple values.
- `bind_address=198.51.100.20/red,2001:db8:0:f101::1/blue,192.0.2.50`: The server listens on the 198.51.100.20 IPv4 address in the red namespace, the 2001:db8:0:f101::1 IPv6 address in the blue namespace, and the 192.0.2.50 IPv4 address in the global namespace.

When bind_address names a single value (wildcard or non-wildcard), the server listens on a single socket, which for a wildcard address may be bound to multiple network interfaces. When bind_address names a list of multiple values, the server listens on one socket per value, with each socket bound to a single network interface. The number of sockets is linear in the number of values specified. Depending on operating system connection-acceptance efficiency, long value lists might incur a performance penalty for accepting TCP/IP connections.

Because file descriptors are allocated for listening sockets and network namespace files, it may be necessary to increase the open_files_limit system variable.

If you intend to bind the server to a specific address, be sure that the mysql.user system table contains an account with administrative privileges that you can use to connect to that address. Otherwise, you cannot shut down the server. For example, if you bind the server to `*`, you can connect to it using all existing accounts. But if you bind the server to `::1`, it accepts connections only on that address. In that case, first make sure that the `'root'@'::1'` account is present in the mysql.user table so you can still connect to the server to shut it down.
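As a concrete illustration of the list syntax described above, here is a hypothetical my.cnf fragment; the addresses and the namespace name are placeholders, and the `blue` namespace would have to exist on the host beforehand:

```ini
# /etc/my.cnf (hypothetical example)
[mysqld]
# One IPv4 address in the global namespace, plus one IPv6 address in a
# previously configured network namespace named "blue".
bind_address=198.51.100.20,2001:db8:0:f101::1/blue

# Listening sockets and namespace files consume file descriptors, so a
# long address list may require raising this limit.
open_files_limit=10000
```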

```bat
@echo off
setlocal EnableDelayedExpansion

REM Target root directory (change as needed)
set "TARGET_ROOT=C:\Target"

echo 请将文件拖到本窗口,然后按回车继续...
set /p "file_list=拖入文件后按回车: "

REM Enhanced escaping of special characters
set "file_list=!file_list:"=!"
set "file_list=!file_list:^=^^!"
set "file_list=!file_list:&=^&!"
set "file_list=!file_list:|=^|!"

for %%F in (!file_list!) do (
    set "full_path=%%~F"

    REM Smart handling of network (UNC) paths
    if "!full_path:~0,2!"=="\\" (
        echo 正在验证网络路径访问...
        set "parent_dir=%%~dpF"
        set "parent_dir=!parent_dir:~0,-1!"
        REM Map the parent directory rather than the file path
        pushd "!parent_dir!" 2>nul || (
            echo [错误] 网络路径访问失败:!parent_dir!
            goto :nextfile
        )
        set "mapped_drive=!cd!"
        set "full_path=!mapped_drive!\%%~nxF"
        popd
    )

    REM Enhanced path-truncation logic
    set "search_marker=\STD-PARMS\"
    set "path_var=!full_path:%search_marker%=#MARKER#!"
    for /f "tokens=2 delims=#" %%A in ("!path_var!") do (
        set "relative_path=%%A"
    )

    if not defined relative_path (
        echo [错误] 路径未包含STD-PARMS:%%~nxF
    ) else (
        REM Build the destination path
        set "relative_path=!relative_path:\%%~nxF=!"
        set "dest_dir=!TARGET_ROOT!\!relative_path!"

        REM Create the directory (with path validation)
        if not exist "!full_path!" (
            echo [错误] 源文件不存在:%%~nxF
            goto :nextfile
        )
        mkdir "!dest_dir!" 2>nul || (
            echo [错误] 目录创建失败:!dest_dir!
            goto :nextfile
        )

        REM Copy the file
        copy /Y "!full_path!" "!dest_dir!\" >nul && (
            echo [成功] 已复制到:!dest_dir!\
        ) || (
            echo [错误] 复制失败:%%~nxF
        )
    )
    :nextfile
    set "relative_path="
)

echo 操作执行完毕,按任意键退出...
pause >nul
```

Observed output:

```
拖入文件后按回车: \\10.50.64.12\std\vt_std_lib\STD-PARMS\AIRTAC\chuanganqi\DMSG\DMSG.prt
正在映射网络路径:\\10.50.64.12\std\vt_std_lib\STD-PARMS\AIRTAC\chuanganqi\DMSG\
[错误] 源文件不存在:DMSG.prt
操作执行完毕,按任意键退出...
```
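A hedged reading of the failure: `pushd` maps the UNC parent directory to a temporary drive letter and `full_path` is rewritten to use that letter, but the immediately following `popd` releases the mapping, so the later `if not exist "!full_path!"` check can no longer resolve the path and reports 源文件不存在 even though the UNC file exists. Two further problems: `goto :nextfile` targets a label defined inside the `for (...)` block, which cmd does not support (the `goto` aborts the whole loop), and `tokens=2 delims=#` applied to a string containing `#MARKER#` yields the literal token `MARKER`, not the path after the marker, because consecutive `#` characters are separate delimiters. A repair sketch under those assumptions: skip the drive mapping entirely (both `if exist` and `copy` accept UNC paths), use a single `#` as the marker, and replace the in-loop label with a subroutine:

```bat
@echo off
setlocal EnableDelayedExpansion
set "TARGET_ROOT=C:\Target"

set /p "file_list=拖入文件后按回车: "
set "file_list=!file_list:"=!"

for %%F in (!file_list!) do call :process "%%~F"
echo 操作执行完毕,按任意键退出...
pause >nul
exit /b

:process
set "full_path=%~1"
REM No pushd/popd mapping: "if exist" and "copy" handle UNC paths directly.
if not exist "!full_path!" (
    echo [错误] 源文件不存在:%~nx1
    exit /b
)
REM Single "#" marker, so tokens=2 is everything after \STD-PARMS\.
set "path_var=!full_path:\STD-PARMS\=#!"
set "relative_path="
for /f "tokens=2 delims=#" %%A in ("!path_var!") do set "relative_path=%%A"
if not defined relative_path (
    echo [错误] 路径未包含STD-PARMS:%~nx1
    exit /b
)
set "relative_path=!relative_path:\%~nx1=!"
set "dest_dir=%TARGET_ROOT%\!relative_path!"
if not exist "!dest_dir!" mkdir "!dest_dir!"
copy /Y "!full_path!" "!dest_dir!\" >nul && (
    echo [成功] 已复制到:!dest_dir!\
) || (
    echo [错误] 复制失败:%~nx1
)
exit /b
```

The `call :process` pattern gives each file its own scope for early exits (`exit /b`), which is the idiomatic replacement for `goto` inside a parenthesized loop body.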

```python
from __future__ import annotations

from datetime import datetime
from typing import List, Optional
from uuid import UUID, uuid4

from pydantic import EmailStr
from sqlmodel import Field, Relationship, SQLModel
from sqlalchemy.orm import Mapped, mapped_column


# ---------------------------
# User models
# ---------------------------
class UserBase(SQLModel):
    email: EmailStr = Field(unique=True, index=True, max_length=255)
    is_active: bool = True
    is_superuser: bool = False
    full_name: Optional[str] = Field(default=None, max_length=255)


class UserCreate(UserBase):
    password: str = Field(min_length=8, max_length=40)


class UserRegister(SQLModel):
    email: EmailStr = Field(max_length=255)
    password: str = Field(min_length=8, max_length=40)
    full_name: Optional[str] = Field(default=None, max_length=255)


class UserUpdate(SQLModel):
    email: Optional[EmailStr] = Field(default=None, max_length=255)
    full_name: Optional[str] = Field(default=None, max_length=255)
    password: Optional[str] = Field(default=None, min_length=8, max_length=40)


class UserUpdateMe(SQLModel):
    email: Optional[EmailStr] = Field(default=None, max_length=255)
    full_name: Optional[str] = Field(default=None, max_length=255)


class UpdatePassword(SQLModel):
    current_password: str = Field(min_length=8, max_length=40)
    new_password: str = Field(min_length=8, max_length=40)


class UserBarFollow(SQLModel, table=True):
    __tablename__ = "user_bar_follow"
    user_id: UUID = Field(foreign_key="user.id", primary_key=True)
    bar_id: UUID = Field(foreign_key="bar.bar_id", primary_key=True)
    created_at: datetime = Field(default_factory=datetime.utcnow)


class User(UserBase, table=True):
    id: UUID = Field(default_factory=uuid4, primary_key=True)
    hashed_password: str = Field(nullable=False)

    created_posts: List["Post"] = Relationship(back_populates="author")
    created_replies: List["Reply"] = Relationship(back_populates="author")
    created_bars: List["Bar"] = Relationship(back_populates="creator")
    followed_bars: List["Bar"] = Relationship(back_populates="followers", link_model=UserBarFollow)


class UserPublic(UserBase):
    id: UUID


class UsersPublic(SQLModel):
    data: List[UserPublic]
    count: int


# ---------------------------
# Bar models
# ---------------------------
class BarBase(SQLModel):
    name: str = Field(..., min_length=1, max_length=50)
    description: str = Field(..., min_length=1, max_length=500)
    follower_count: int = Field(default=0, ge=0)
    post_count: int = Field(default=0, ge=0)
    creator_id: UUID
    is_official: bool = Field(default=False)


class BarCreate(BarBase):
    pass


class BarUpdate(SQLModel):
    name: Optional[str] = Field(None, min_length=1, max_length=255)
    description: Optional[str] = Field(None, min_length=1, max_length=255)
    follower_count: Optional[int] = Field(None, ge=0)


class Bar(SQLModel, table=True):
    bar_id: UUID = Field(default_factory=uuid4, primary_key=True)
    name: str = Field(..., min_length=1, max_length=50)
    description: str = Field(..., min_length=1, max_length=500)
    follower_count: int = Field(default=0, ge=0)
    post_count: int = Field(default=0, ge=0)
    creator_id: UUID = Field(foreign_key="user.id")
    is_official: bool = Field(default=False)
    created_at: datetime = Field(default_factory=datetime.utcnow)

    creator: "User" = Relationship(back_populates="created_bars")
    posts: List["Post"] = Relationship(back_populates="bar")
    followers: List["User"] = Relationship(back_populates="followed_bars", link_model=UserBarFollow)


class BarPublic(BarBase):
    bar_id: UUID


# ---------------------------
# Post models
# ---------------------------
class PostBase(SQLModel):
    title: str = Field(..., max_length=100)
    content: str = Field(..., max_length=5000)
    is_top: bool = Field(default=False)
    like_count: int = Field(default=0, ge=0)
    reply_count: int = Field(default=0, ge=0)
    view_count: int = Field(default=0, ge=0)


class PostCreate(PostBase):
    pass


class PostUpdate(SQLModel):
    title: Optional[str] = Field(None, max_length=100)
    content: Optional[str] = Field(None, max_length=5000)
    is_top: Optional[bool] = None


class Post(SQLModel, table=True):
    post_id: UUID = Field(default_factory=uuid4, primary_key=True)
    bar_id: UUID = Field(foreign_key="bar.bar_id")
    author_id: UUID = Field(foreign_key="user.id")
    title: str = Field(..., max_length=100)
    content: str = Field(..., max_length=5000)
    created_at: datetime = Field(default_factory=datetime.utcnow)
    updated_at: datetime = Field(default_factory=datetime.utcnow)

    bar: "Bar" = Relationship(back_populates="posts")
    author: "User" = Relationship(back_populates="created_posts")


class PostPublic(PostBase):
    post_id: UUID
    bar_id: UUID
    author_id: UUID
    created_at: datetime
    updated_at: datetime


# ---------------------------
# Reply models
# ---------------------------
class ReplyBase(SQLModel):
    content: str = Field(..., max_length=2000)


class ReplyCreate(ReplyBase):
    pass


class ReplyUpdate(SQLModel):
    content: Optional[str] = Field(None, max_length=2000)


class Reply(SQLModel, table=True):
    reply_id: UUID = Field(default_factory=uuid4, primary_key=True)
    post_id: UUID = Field(foreign_key="post.post_id")
    author_id: UUID = Field(foreign_key="user.id")
    content: str = Field(..., max_length=2000)
    created_at: datetime = Field(default_factory=datetime.utcnow)

    post: "Post" = Relationship(back_populates="replies")
    author: "User" = Relationship(back_populates="created_replies")


class ReplyPublic(ReplyBase):
    reply_id: UUID
    post_id: UUID
    author_id: UUID
    created_at: datetime


# ---------------------------
# Miscellaneous models
# ---------------------------
class Message(SQLModel):
    message: str


class Token(SQLModel):
    access_token: str
    token_type: str = "bearer"


class TokenPayload(SQLModel):
    sub: Optional[str] = None


class NewPassword(SQLModel):
    token: str
    new_password: str = Field(min_length=8, max_length=40)
```

Running the prestart container fails with the following error:

```
PS C:\Users\林林子\Desktop\full-stack-fastapi-template-0.8.0\backend> docker logs full-stack-fastapi-template-080-prestart-1
+ python app/backend_pre_start.py
/app/app/core/config.py:105: UserWarning: The value of SECRET_KEY is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
/app/app/core/config.py:105: UserWarning: The value of POSTGRES_PASSWORD is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
/app/app/core/config.py:105: UserWarning: The value of FIRST_SUPERUSER_PASSWORD is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
INFO:__main__:Initializing service
INFO:__main__:Starting call to '__main__.init', this is the 1st time calling it.
INFO:__main__:Service finished initializing
+ alembic upgrade head
/app/app/core/config.py:105: UserWarning: The value of SECRET_KEY is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
/app/app/core/config.py:105: UserWarning: The value of POSTGRES_PASSWORD is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
/app/app/core/config.py:105: UserWarning: The value of FIRST_SUPERUSER_PASSWORD is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
+ python app/initial_data.py
/app/app/core/config.py:105: UserWarning: The value of SECRET_KEY is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
/app/app/core/config.py:105: UserWarning: The value of POSTGRES_PASSWORD is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
/app/app/core/config.py:105: UserWarning: The value of FIRST_SUPERUSER_PASSWORD is "changethis", for security, please change it, at least for deployments.
  warnings.warn(message, stacklevel=1)
INFO:__main__:Creating initial data
Traceback (most recent call last):
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/clsregistry.py", line 516, in _resolve_name
    rval = d[token]
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/util/_collections.py", line 345, in __missing__
    self[key] = val = self.creator(key)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/clsregistry.py", line 484, in _access_cls
    return self.fallback[key]
KeyError: "List['Post']"

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/app/initial_data.py", line 23, in <module>
    main()
  File "/app/app/initial_data.py", line 18, in main
    init()
  File "/app/app/initial_data.py", line 13, in init
    init_db(session)
  File "/app/app/core/db.py", line 24, in init_db
    user = session.exec(
  File "/app/.venv/lib/python3.10/site-packages/sqlmodel/orm/session.py", line 66, in exec
    results = super().execute(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2362, in execute
    return self._execute_internal(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2247, in _execute_internal
    result: Result[Any] = compile_state_cls.orm_execute_statement(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 305, in orm_execute_statement
    result = conn.execute(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
    return meth(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1632, in _execute_clauseelement
    compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 703, in _compile_w_cache
    compiled_sql = self._compiler(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 316, in _compiler
    return dialect.statement_compiler(dialect, self, **kw)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 1429, in __init__
    Compiled.__init__(self, dialect, statement, **kwargs)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 870, in __init__
    self.string = self.process(self.statement, **compile_kwargs)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 915, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 141, in _compiler_dispatch
    return meth(self, **kw)  # type: ignore  # noqa: E501
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 4679, in visit_select
    compile_state = select_stmt._compile_state_factory(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/sql/base.py", line 683, in create_for_statement
    return klass.create_for_statement(statement, compiler, **kw)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 1110, in create_for_statement
    _QueryEntity.to_compile_state(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 2565, in to_compile_state
    _MapperEntity(
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 2645, in __init__
    entity._post_inspect
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1253, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 2711, in _post_inspect
    self._check_configure()
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 2388, in _check_configure
    _configure_registries({self.registry}, cascade=True)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 4204, in _configure_registries
    _do_configure_registries(registries, cascade)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 4245, in _do_configure_registries
    mapper._post_configure_properties()
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 2405, in _post_configure_properties
    prop.init()
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/interfaces.py", line 584, in init
    self.do_init()
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/relationships.py", line 1642, in do_init
    self._setup_entity()
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/relationships.py", line 1854, in _setup_entity
    self._clsregistry_resolve_name(argument)(),
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/clsregistry.py", line 520, in _resolve_name
    self._raise_for_name(name, err)
  File "/app/.venv/lib/python3.10/site-packages/sqlalchemy/orm/clsregistry.py", line 491, in _raise_for_name
    raise exc.InvalidRequestError(
sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper[User(user)], expression "relationship("List['Post']")" seems to be using a generic class as the argument to relationship(); please state the generic argument using an annotation, e.g. "created_posts: Mapped[List['Post']] = relationship()"
```
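The decisive line in the traceback is `KeyError: "List['Post']"`. With `from __future__ import annotations` in effect, every annotation in the module is stored as a string (PEP 563), so SQLModel hands SQLAlchemy the literal text `List['Post']` as the relationship target, and SQLAlchemy tries to look that string up as a class name in its registry. The fix is to remove the future import and keep ordinary `List["Post"]` annotations. Note also that `Reply.post` declares `back_populates="replies"`, so `Post` needs a matching `replies` relationship or mapper configuration will fail at the next step. A hedged sketch, trimmed to the classes involved:

```python
# Sketch: models.py without PEP 563 string-ified annotations.
# Only the parts relevant to the error are shown; the remaining
# fields stay exactly as in the original module.
from datetime import datetime
from typing import List, Optional
from uuid import UUID, uuid4

from sqlmodel import Field, Relationship, SQLModel

# 1) Do NOT use `from __future__ import annotations` in this module:
#    it turns List["Post"] into the string "List['Post']", which
#    SQLAlchemy then fails to resolve as a class name.

class User(SQLModel, table=True):
    id: UUID = Field(default_factory=uuid4, primary_key=True)
    hashed_password: str = Field(nullable=False)
    created_posts: List["Post"] = Relationship(back_populates="author")
    created_replies: List["Reply"] = Relationship(back_populates="author")

class Post(SQLModel, table=True):
    post_id: UUID = Field(default_factory=uuid4, primary_key=True)
    author_id: UUID = Field(foreign_key="user.id")
    title: str = Field(max_length=100)
    author: Optional["User"] = Relationship(back_populates="created_posts")
    # 2) Reply.post declares back_populates="replies", so Post must
    #    define the matching side of that relationship:
    replies: List["Reply"] = Relationship(back_populates="post")

class Reply(SQLModel, table=True):
    reply_id: UUID = Field(default_factory=uuid4, primary_key=True)
    post_id: UUID = Field(foreign_key="post.post_id")
    author_id: UUID = Field(foreign_key="user.id")
    content: str = Field(max_length=2000)
    post: Optional["Post"] = Relationship(back_populates="replies")
    author: Optional["User"] = Relationship(back_populates="created_replies")
```

The `Mapped[List['Post']]` hint in the error message is SQLAlchemy's generic suggestion for plain declarative classes; with SQLModel, dropping the future import and declaring both sides of each relationship is the idiomatic fix.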
