
bitwise_not

Posted: 2023-12-08 14:03:58 · Views: 170
The `bitwise_not` function in the OpenCV image-processing library inverts the input image. Specifically, it inverts the value of every channel of every pixel, so black areas become white and white areas become black. Its usage is:

```cpp
bitwise_not(InputArray src, OutputArray dst)
```

where `src` is the input image and `dst` is the output image: the function inverts each channel value of every pixel in `src` and stores the result in `dst`.

In C++, OpenCV's `bitwise_not` can be used to invert an image, i.e. to swap its black and white regions. For example, the following snippet inverts an image and displays the result:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image, image2;
    image = cv::imread("path/to/image.jpg");
    cv::bitwise_not(image, image2);
    cv::imshow("result", image2);
    cv::waitKey(0);
    return 0;
}
```

This code reads an image, inverts it with `bitwise_not`, and shows the result in a window named "result". Note that "path/to/image.jpg" should be replaced with an actual image file path.

Also, if you want the inversion to happen as part of thresholding, you can pass `THRESH_BINARY_INV` to the `threshold` function; it produces the same binary image as `THRESH_BINARY` followed by `bitwise_not`, as the sketch below shows. See the official OpenCV documentation for details. [1][2][3]

References:
- *1* *2* [opencv中的bitwise_not,bitwise_xor,bitwise_or,bitwise_and的使用方法与效果。](https://blog.csdn.net/zhushiq1234/article/details/51113671)
- *3* [code_hdu.rar_ACM_The First_hdu_test case example](https://download.csdn.net/download/weixin_42651281/86645822)
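To make that equivalence concrete, here is a minimal Python sketch (the Python bindings mirror the C++ API; the file path and threshold value 127 are placeholders):

```python
import cv2
import numpy as np

gray = cv2.imread("path/to/image.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Threshold with the inversion built in...
_, inv_direct = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

# ...versus thresholding first and inverting afterwards.
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
inv_via_not = cv2.bitwise_not(binary)

print(np.array_equal(inv_direct, inv_via_not))  # expected: True
```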

Related recommendations

```python
# -*- coding: gbk -*-
import cv2
import torch
import numpy as np
import os

def enhance_depth_estimation(img_path, mask_path):
    # Load the model
    midas = torch.hub.load("/home/lichuang/project/Opencv-main/sam_yolo/MiDaS/", "DPT_Large", source='local')
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    midas.to(device)
    midas.eval()

    # Preprocessing
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)

    # Make sure the mask holds only 0 and 1, with white regions mapped to 1
    mask = (mask > 0).astype(np.uint8)

    # Apply the mask
    img_masked = cv2.bitwise_and(img, img, mask=mask)

    # Convert to the model's input format, using the original image size
    input_batch = torch.from_numpy(img_masked).permute(2, 0, 1).unsqueeze(0).float().to(device)
    # No longer interpolating to (384, 384)

    # Depth prediction
    with torch.no_grad():
        prediction = midas(input_batch)
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=img.shape[:2],  # use the original image size
            mode="bicubic",
            align_corners=False
        ).squeeze().cpu().numpy()

    # Edge inpainting
    depth_map = (prediction - prediction.min()) / (prediction.max() - prediction.min())
    inpainted_depth = cv2.inpaint((depth_map * 255).astype(np.uint8), 255 - mask, 3, cv2.INPAINT_TELEA)
    return inpainted_depth

def process_folders(segmented_folder, mask_folder, output_folder):
    # Make sure the output folder exists
    os.makedirs(output_folder, exist_ok=True)

    # Walk the segmented-image folder recursively
    for root, _, files in os.walk(segmented_folder):
        for filename in files:
            if filename.endswith(('.png', '.jpg', '.jpeg')):
                img_path = os.path.join(root, filename)
                # Compute the relative path
                relative_path = os.path.relpath(root, segmented_folder)
                mask_path = os.path.join(mask_folder, relative_path, filename)

                # Check that the corresponding mask file exists
                if not os.path.exists(mask_path):
                    print(f"No matching mask file found: {mask_path}, skipping {img_path}.")
                    continue

                # Generate the depth map
                depth_map = enhance_depth_estimation(img_path, mask_path)

                # Build the output path, preserving the subfolder structure
                output_subfolder = os.path.join(output_folder, relative_path)
                os.makedirs(output_subfolder, exist_ok=True)
                output_path = os.path.join(output_subfolder, filename)

                # Save the depth map
                cv2.imwrite(output_path, depth_map)
                print(f"Generated a depth map for {img_path} and saved it to {output_path}.")

if __name__ == "__main__":
    segmented_folder = "/home/lichuang/project/Opencv-main/sam_yolo/dataset/input_images/"  # replace with the actual segmented-image folder
    mask_folder = "/home/lichuang/project/Opencv-main/sam_yolo/dataset/masks/"              # replace with the actual mask folder
    output_folder = "/home/lichuang/project/Opencv-main/sam_yolo/dataset/depth_maps1/"      # replace with the actual output folder
    process_folders(segmented_folder, mask_folder, output_folder)
```

Modify the depth-map generation code to try to solve this problem.
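One likely cause of poor results in the code above is that `DPT_Large` is fed a raw `[0, 1]` tensor at the original resolution instead of going through MiDaS's own preprocessing (resize plus normalization). A minimal sketch of using the official transforms — assuming the `transforms` hub entry is available (it is in the upstream `intel-isl/MiDaS` repo; a local checkout may expose it too):

```python
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")  # or the local path, as in the question
midas.to(device).eval()

# The hub also ships the preprocessing that matches each model.
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.dpt_transform  # the transform paired with DPT_Large

img = cv2.cvtColor(cv2.imread("path/to/image.jpg"), cv2.COLOR_BGR2RGB)  # uint8 RGB; placeholder path
input_batch = transform(img).to(device)  # resized, normalized batch of shape (1, 3, H', W')

with torch.no_grad():
    prediction = midas(input_batch)
    # Then upsample back to the original resolution, as in the question's code.
```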

```verilog
module alu(
    input  wire [11:0] alu_op,
    input  wire [31:0] alu_src1,
    input  wire [31:0] alu_src2,
    output wire [31:0] alu_result
);

wire op_add;   //add operation
wire op_sub;   //sub operation
wire op_slt;   //signed compared and set less than
wire op_sltu;  //unsigned compared and set less than
wire op_and;   //bitwise and
wire op_nor;   //bitwise nor
wire op_or;    //bitwise or
wire op_xor;   //bitwise xor
wire op_sll;   //logic left shift
wire op_srl;   //logic right shift
wire op_sra;   //arithmetic right shift
wire op_lui;   //Load Upper Immediate

// control code decomposition
assign op_add  = alu_op[ 0];
assign op_sub  = alu_op[ 1];
assign op_slt  = alu_op[ 2];
assign op_sltu = alu_op[ 3];
assign op_and  = alu_op[ 4];
assign op_nor  = alu_op[ 5];
assign op_or   = alu_op[ 6];
assign op_xor  = alu_op[ 7];
assign op_sll  = alu_op[ 8];
assign op_srl  = alu_op[ 9];
assign op_sra  = alu_op[10];
assign op_lui  = alu_op[11];

wire [31:0] add_sub_result;
wire [31:0] slt_result;
wire [31:0] sltu_result;
wire [31:0] and_result;
wire [31:0] nor_result;
wire [31:0] or_result;
wire [31:0] xor_result;
wire [31:0] lui_result;
wire [31:0] sll_result;
wire [63:0] sr64_result;
wire [31:0] sr_result;

// 32-bit adder
wire [31:0] adder_a;
wire [31:0] adder_b;
wire        adder_cin;
wire [31:0] adder_result;
wire        adder_cout;

assign adder_a   = alu_src1;
assign adder_b   = (op_sub | op_slt | op_sltu) ? ~alu_src2 : alu_src2;  //src1 - src2: rj - rk
assign adder_cin = (op_sub | op_slt | op_sltu) ? 1'b1 : 1'b0;
assign {adder_cout, adder_result} = adder_a + adder_b + adder_cin;

// ADD, SUB result
assign add_sub_result = adder_result;

// SLT result
assign slt_result[31:1] = 31'b0;  //rj < rk => 1
assign slt_result[0]    = (alu_src1[31] & ~alu_src2[31])
                        | ((alu_src1[31] ~^ alu_src2[31]) & adder_result[31]);

// SLTU result
assign sltu_result[31:1] = 31'b0;
assign sltu_result[0]    = ~adder_cout;

// bitwise operation
assign and_result = alu_src1 & alu_src2;
assign or_result  = alu_src1 | alu_src2;
assign nor_result = ~or_result;
assign xor_result = alu_src1 ^ alu_src2;
assign lui_result = alu_src2;

// SLL result
assign sll_result = alu_src1 << alu_src2[4:0];  //rj << ui5

// SRL, SRA result
assign sr64_result = {{32{op_sra & alu_src1[31]}}, alu_src1[31:0]} >> alu_src2[4:0];  //rj >> i5
assign sr_result   = sr64_result[31:0];

// final result mux
assign alu_result = ({32{op_add|op_sub}} & add_sub_result)
                  | ({32{op_slt       }} & slt_result)
                  | ({32{op_sltu      }} & sltu_result)
                  | ({32{op_and       }} & and_result)
                  | ({32{op_nor       }} & nor_result)
                  | ({32{op_or        }} & or_result)
                  | ({32{op_xor       }} & xor_result)
                  | ({32{op_lui       }} & lui_result)
                  | ({32{op_sll       }} & sll_result)
                  | ({32{op_srl|op_sra}} & sr_result);

endmodule

// ============================================================

`include "mycpu.h"

module exe_stage(
    input                          clk           ,
    input                          reset         ,
    //allowin
    input                          ms_allowin    ,
    output                         es_allowin    ,
    //from ds
    input                          ds_to_es_valid,
    input  [`DS_TO_ES_BUS_WD -1:0] ds_to_es_bus  ,
    //to ms
    output                         es_to_ms_valid,
    output [`ES_TO_MS_BUS_WD -1:0] es_to_ms_bus  ,
    // data sram interface (write)
    output        data_sram_en   ,
    output [ 3:0] data_sram_we   ,
    output [31:0] data_sram_addr ,
    output [31:0] data_sram_wdata,
    output [ 4:0] es_to_ds_dest  ,
    output        es_to_ds_load_op
);

reg                          es_valid      ;
wire                         es_ready_go   ;
reg  [`DS_TO_ES_BUS_WD -1:0] ds_to_es_bus_r;

wire [11:0] alu_op;
wire        es_load_op;
wire        src1_is_pc;
wire        src2_is_imm;
wire        src2_is_4;
wire        res_from_mem;
wire        dst_is_r1;
wire        gr_we;
wire        es_mem_we;
wire [ 4:0] dest;
wire [31:0] rj_value;
wire [31:0] rkd_value;
wire [31:0] imm;
wire [31:0] es_pc;

assign {alu_op, es_load_op, src1_is_pc, src2_is_imm,
        src2_is_4, gr_we, es_mem_we, dest, imm,
        rj_value, rkd_value, es_pc, res_from_mem} = ds_to_es_bus_r;

wire [31:0] alu_src1;
wire [31:0] alu_src2;
wire [31:0] alu_result;

// not used in lab7
wire es_res_from_mem;
assign es_res_from_mem = es_load_op;

assign es_to_ms_bus = {res_from_mem,  //70:70  1
                       gr_we       ,  //69:69  1
                       dest        ,  //68:64  5
                       alu_result  ,  //63:32  32
                       es_pc          //31:0   32
                      };

assign es_ready_go    = 1'b1;
assign es_allowin     = !es_valid || es_ready_go && ms_allowin;
assign es_to_ms_valid = es_valid && es_ready_go;

always @(posedge clk) begin
    if (reset) begin
        es_valid <= 1'b0;
    end
    else if (es_allowin) begin
        es_valid <= ds_to_es_valid;
    end
    if (ds_to_es_valid && es_allowin) begin
        ds_to_es_bus_r <= ds_to_es_bus;
    end
end

assign alu_src1 = src1_is_pc  ? es_pc : rj_value;
assign alu_src2 = src2_is_imm ? imm   : rkd_value;

alu u_alu(
    .alu_op     (alu_op    ),
    .alu_src1   (alu_src1  ),
    .alu_src2   (alu_src2  ),
    .alu_result (alu_result)
);

assign data_sram_en    = 1'b1;
assign data_sram_we    = es_mem_we && es_valid ? 4'hf : 4'h0;
assign data_sram_addr  = alu_result;
assign data_sram_wdata = rkd_value;

assign es_to_ds_dest    = dest & {5{es_valid}};
assign es_to_ds_load_op = es_load_op && es_valid;  // flag a load in EXE for the load-use hazard check in ID

endmodule

// ============================================================

`include "mycpu.h"

module id_stage(
    input                          clk           ,
    input                          reset         ,
    //allowin
    input                          es_allowin    ,
    output                         ds_allowin    ,
    //from fs
    input                          fs_to_ds_valid,
    input  [`FS_TO_DS_BUS_WD -1:0] fs_to_ds_bus  ,
    //to es
    output                         ds_to_es_valid,
    output [`DS_TO_ES_BUS_WD -1:0] ds_to_es_bus  ,
    //to fs
    output [`BR_BUS_WD       -1:0] br_bus        ,
    //to rf: for write back
    input  [`WS_TO_RF_BUS_WD -1:0] ws_to_rf_bus  ,
    input  [4:0]                   es_to_ds_dest ,
    input  [4:0]                   ms_to_ds_dest ,
    input  [4:0]                   ws_to_ds_dest ,
    input                          es_to_ds_load_op
);

wire        br_taken;
wire [31:0] br_target;
wire [31:0] ds_pc;
wire [31:0] ds_inst;

reg         ds_valid;
wire        ds_ready_go;

wire [11:0] alu_op;
wire        load_op;
wire        src1_is_pc;
wire        src2_is_imm;
wire        res_from_mem;
wire        dst_is_r1;
wire        gr_we;
wire        mem_we;
wire        src_reg_is_rd;
wire [ 4:0] dest;
wire [31:0] rj_value;
wire [31:0] rkd_value;
wire [31:0] imm;
wire [31:0] br_offs;
wire [31:0] jirl_offs;

wire [ 5:0] op_31_26;
wire [ 3:0] op_25_22;
wire [ 1:0] op_21_20;
wire [ 4:0] op_19_15;
wire [ 4:0] rd;
wire [ 4:0] rj;
wire [ 4:0] rk;
wire [11:0] i12;
wire [19:0] i20;
wire [15:0] i16;
wire [25:0] i26;

wire [63:0] op_31_26_d;
wire [15:0] op_25_22_d;
wire [ 3:0] op_21_20_d;
wire [31:0] op_19_15_d;

wire inst_add_w;
wire inst_sub_w;
wire inst_slt;
wire inst_sltu;
wire inst_nor;
wire inst_and;
wire inst_or;
wire inst_xor;
wire inst_slli_w;
wire inst_srli_w;
wire inst_srai_w;
wire inst_addi_w;
wire inst_ld_w;
wire inst_st_w;
wire inst_jirl;
wire inst_b;
wire inst_bl;
wire inst_beq;
wire inst_bne;
wire inst_lu12i_w;

wire need_ui5;
wire need_si12;
wire need_si16;
wire need_si20;
wire need_si26;
wire src2_is_4;

wire [ 4:0] rf_raddr1;
wire [31:0] rf_rdata1;
wire [ 4:0] rf_raddr2;
wire [31:0] rf_rdata2;

wire        rf_we;
wire [ 4:0] rf_waddr;
wire [31:0] rf_wdata;

wire [31:0] alu_src1;
wire [31:0] alu_src2;
wire [31:0] alu_result;

wire [31:0] mem_result;
wire [31:0] final_result;

wire inst_no_dest;
wire src_no_rj;
wire src_no_rk;
wire src_no_rd;
wire rj_wait;
wire rk_wait;
wire rd_wait;
wire no_wait;
wire br_stall;
wire load_stall;
wire rj_eq_rd;

assign op_31_26 = ds_inst[31:26];
assign op_25_22 = ds_inst[25:22];
assign op_21_20 = ds_inst[21:20];
assign op_19_15 = ds_inst[19:15];

assign rd = ds_inst[ 4: 0];
assign rj = ds_inst[ 9: 5];
assign rk = ds_inst[14:10];

assign i12 = ds_inst[21:10];
assign i20 = ds_inst[24: 5];
assign i16 = ds_inst[25:10];
assign i26 = {ds_inst[ 9: 0], ds_inst[25:10]};

decoder_6_64 u_dec0(.in(op_31_26), .out(op_31_26_d));
decoder_4_16 u_dec1(.in(op_25_22), .out(op_25_22_d));
decoder_2_4  u_dec2(.in(op_21_20), .out(op_21_20_d));
decoder_5_32 u_dec3(.in(op_19_15), .out(op_19_15_d));

assign inst_add_w   = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h00];
assign inst_sub_w   = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h02];
assign inst_slt     = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h04];
assign inst_sltu    = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h05];
assign inst_nor     = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h08];
assign inst_and     = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h09];
assign inst_or      = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h0a];
assign inst_xor     = op_31_26_d[6'h00] & op_25_22_d[4'h0] & op_21_20_d[2'h1] & op_19_15_d[5'h0b];
assign inst_slli_w  = op_31_26_d[6'h00] & op_25_22_d[4'h1] & op_21_20_d[2'h0] & op_19_15_d[5'h01];
assign inst_srli_w  = op_31_26_d[6'h00] & op_25_22_d[4'h1] & op_21_20_d[2'h0] & op_19_15_d[5'h09];
assign inst_srai_w  = op_31_26_d[6'h00] & op_25_22_d[4'h1] & op_21_20_d[2'h0] & op_19_15_d[5'h11];
assign inst_addi_w  = op_31_26_d[6'h00] & op_25_22_d[4'ha];
assign inst_ld_w    = op_31_26_d[6'h0a] & op_25_22_d[4'h2];
assign inst_st_w    = op_31_26_d[6'h0a] & op_25_22_d[4'h6];
assign inst_jirl    = op_31_26_d[6'h13];
assign inst_b       = op_31_26_d[6'h14];
assign inst_bl      = op_31_26_d[6'h15];
assign inst_beq     = op_31_26_d[6'h16];
assign inst_bne     = op_31_26_d[6'h17];
assign inst_lu12i_w = op_31_26_d[6'h05] & ~ds_inst[25];

assign alu_op[ 0] = inst_add_w | inst_addi_w | inst_ld_w | inst_st_w
                  | inst_jirl | inst_bl;
assign alu_op[ 1] = inst_sub_w;
assign alu_op[ 2] = inst_slt;
assign alu_op[ 3] = inst_sltu;
assign alu_op[ 4] = inst_and;
assign alu_op[ 5] = inst_nor;
assign alu_op[ 6] = inst_or;
assign alu_op[ 7] = inst_xor;
assign alu_op[ 8] = inst_slli_w;
assign alu_op[ 9] = inst_srli_w;
assign alu_op[10] = inst_srai_w;
assign alu_op[11] = inst_lu12i_w;

assign need_ui5  = inst_slli_w | inst_srli_w | inst_srai_w;
assign need_si12 = inst_addi_w | inst_ld_w | inst_st_w;
assign need_si16 = inst_jirl | inst_beq | inst_bne;
assign need_si20 = inst_lu12i_w;
assign need_si26 = inst_b | inst_bl;
assign src2_is_4 = inst_jirl | inst_bl;

assign imm = src2_is_4 ? 32'h4                       :
             need_si20 ? {i20[19:0], 12'b0}          :
             need_ui5  ? rk                          :
             /*need_si12*/{{20{i12[11]}}, i12[11:0]} ;

assign br_offs = need_si26 ? {{ 4{i26[25]}}, i26[25:0], 2'b0} :
                             {{14{i16[15]}}, i16[15:0], 2'b0} ;

assign jirl_offs = {{14{i16[15]}}, i16[15:0], 2'b0};

assign src_reg_is_rd = inst_beq | inst_bne | inst_st_w;

assign src1_is_pc = inst_jirl | inst_bl;

assign src2_is_imm = inst_slli_w |
                     inst_srli_w |
                     inst_srai_w |
                     inst_addi_w |
                     inst_ld_w   |
                     inst_st_w   |
                     inst_lu12i_w|
                     inst_jirl   |
                     inst_bl     ;

assign res_from_mem = inst_ld_w;
assign load_op      = inst_ld_w;  // ld.w is the only load instruction; this flag travels to EXE
assign dst_is_r1    = inst_bl;
assign gr_we        = ~inst_st_w & ~inst_beq & ~inst_bne & ~inst_b;
assign mem_we       = inst_st_w;
assign dest         = dst_is_r1 ? 5'd1 : rd;

assign rf_raddr1 = rj;
assign rf_raddr2 = src_reg_is_rd ? rd : rk;

regfile u_regfile(
    .clk    (clk      ),
    .raddr1 (rf_raddr1),
    .rdata1 (rf_rdata1),
    .raddr2 (rf_raddr2),
    .rdata2 (rf_rdata2),
    .we     (rf_we    ),
    .waddr  (rf_waddr ),
    .wdata  (rf_wdata )
);

assign rj_value  = rf_rdata1;
assign rkd_value = rf_rdata2;

assign rj_eq_rd = (rj_value == rkd_value);

assign br_taken = (   inst_beq  &&  rj_eq_rd
                   || inst_bne  && !rj_eq_rd
                   || inst_jirl
                   || inst_bl
                   || inst_b
                  ) && ds_valid & no_wait;

assign br_target = (inst_beq || inst_bne || inst_bl || inst_b) ? (ds_pc + br_offs) :
                                                   /*inst_jirl*/ (rj_value + jirl_offs);

assign br_stall = load_stall & br_taken & ds_valid;

assign load_stall = es_to_ds_load_op & (((rj == es_to_ds_dest) & rj_wait)
                                      | ((rk == es_to_ds_dest) & rk_wait)
                                      | ((rd == es_to_ds_dest) & rd_wait));

assign br_bus = {br_stall, br_taken, br_target};

reg [`FS_TO_DS_BUS_WD -1:0] fs_to_ds_bus_r;

assign {ds_inst,
        ds_pc  } = fs_to_ds_bus_r;

assign {rf_we   ,  //37:37
        rf_waddr,  //36:32
        rf_wdata   //31:0
       } = ws_to_rf_bus;

assign ds_to_es_bus = {alu_op      ,  // 12
                       load_op     ,  // 1
                       src1_is_pc  ,  // 1
                       src2_is_imm ,  // 1
                       src2_is_4   ,  // 1
                       gr_we       ,  // 1
                       mem_we      ,  // 1
                       dest        ,  // 5
                       imm         ,  // 32
                       rj_value    ,  // 32
                       rkd_value   ,  // 32
                       ds_pc       ,  // 32
                       res_from_mem
                      };

assign ds_ready_go    = no_wait;
assign ds_allowin     = !ds_valid || ds_ready_go && es_allowin;
assign ds_to_es_valid = ds_valid && ds_ready_go;

always @(posedge clk) begin
    if (reset) begin
        ds_valid <= 1'b0;
    end
    else if (ds_allowin) begin
        ds_valid <= fs_to_ds_valid;
    end
    if (fs_to_ds_valid && ds_allowin) begin
        fs_to_ds_bus_r <= fs_to_ds_bus;
    end
end

assign inst_no_dest = inst_st_w | inst_b | inst_beq | inst_bne;
assign src_no_rj    = inst_b | inst_bl | inst_lu12i_w;
assign src_no_rk    = inst_slli_w | inst_srli_w | inst_srai_w | inst_addi_w | inst_ld_w | inst_st_w
                    | inst_jirl | inst_b | inst_bl | inst_beq | inst_bne | inst_lu12i_w;
assign src_no_rd    = ~inst_st_w & ~inst_beq & ~inst_bne;

assign rj_wait = ~src_no_rj && (rj != 5'b00000)
              && ((rj == es_to_ds_dest) || (rj == ms_to_ds_dest) || (rj == ws_to_ds_dest));
assign rk_wait = ~src_no_rk && (rk != 5'b00000)
              && ((rk == es_to_ds_dest) || (rk == ms_to_ds_dest) || (rk == ws_to_ds_dest));
assign rd_wait = ~src_no_rd && (rd != 5'b00000)
              && ((rd == es_to_ds_dest) || (rd == ms_to_ds_dest) || (rd == ws_to_ds_dest));
assign no_wait = ~rj_wait & ~rk_wait & ~rd_wait;

endmodule

// ============================================================

`include "mycpu.h"

module if_stage(
    input                          clk            ,
    input                          reset          ,
    //allowin
    input                          ds_allowin     ,
    //brbus
    input  [`BR_BUS_WD       -1:0] br_bus         ,
    //to ds
    output                         fs_to_ds_valid ,
    output [`FS_TO_DS_BUS_WD -1:0] fs_to_ds_bus   ,
    // inst sram interface
    output        inst_sram_en   ,
    output [ 3:0] inst_sram_we   ,
    output [31:0] inst_sram_addr ,
    output [31:0] inst_sram_wdata,
    input  [31:0] inst_sram_rdata
);

reg         fs_valid;
wire        fs_ready_go;
wire        fs_allowin;
wire        to_fs_valid;

wire [31:0] seq_pc;
wire [31:0] nextpc;

wire        br_taken;
wire [31:0] br_target;
wire        br_stall;
wire        pre_if_ready_go;

assign {br_stall, br_taken, br_target} = br_bus;

wire [31:0] fs_inst;
reg  [31:0] fs_pc;

assign fs_to_ds_bus = {fs_inst,
                       fs_pc  };

// pre-IF stage
assign to_fs_valid     = ~reset && pre_if_ready_go;
assign pre_if_ready_go = ~br_stall;
// because after sending fs_pc to ds, the seq_pc = fs_pc + 4 immediately
// Actually, the seq_pc is just a delay slot instruction
// if we use inst pc, here need to -4, it's more troublesome
assign seq_pc = fs_pc + 3'h4;
assign nextpc = br_taken ? br_target : seq_pc;

// IF stage
assign fs_ready_go    = ~br_taken;
assign fs_allowin     = !fs_valid || fs_ready_go && ds_allowin;
assign fs_to_ds_valid = fs_valid && fs_ready_go;

always @(posedge clk) begin
    if (reset) begin
        fs_valid <= 1'b0;
    end
    else if (fs_allowin) begin
        fs_valid <= to_fs_valid;
    end
end

always @(posedge clk) begin
    if (reset) begin
        fs_pc <= 32'h1bfffffc;  //trick: to make nextpc be 0x1c000000 during reset
    end
    else if (to_fs_valid && (fs_allowin || br_taken)) begin
        // if taken is valid, to skip the delay slot instruction,
        // next_pc should be the instruction after the jump inst
        fs_pc <= nextpc;
    end
end

assign inst_sram_en    = to_fs_valid && (fs_allowin || br_taken) && pre_if_ready_go;
assign inst_sram_we    = 4'h0;
assign inst_sram_addr  = nextpc;
assign inst_sram_wdata = 32'b0;
assign fs_inst         = inst_sram_rdata;

endmodule

// ============================================================

`include "mycpu.h"

module mem_stage(
    input                          clk           ,
    input                          reset         ,
    //allowin
    input                          ws_allowin    ,
    output                         ms_allowin    ,
    //from es
    input                          es_to_ms_valid,
    input  [`ES_TO_MS_BUS_WD -1:0] es_to_ms_bus  ,
    //to ws
    output                         ms_to_ws_valid,
    output [`MS_TO_WS_BUS_WD -1:0] ms_to_ws_bus  ,
    //from data-sram
    input  [31:0] data_sram_rdata,
    output [ 4:0] ms_to_ds_dest
);

reg         ms_valid;
wire        ms_ready_go;

reg [`ES_TO_MS_BUS_WD -1:0] es_to_ms_bus_r;

wire        ms_res_from_mem;
wire        ms_gr_we;
wire [ 4:0] ms_dest;
wire [31:0] ms_alu_result;
wire [31:0] ms_pc;

wire [31:0] mem_result;
wire [31:0] ms_final_result;

assign {ms_res_from_mem,  //70:70
        ms_gr_we       ,  //69:69
        ms_dest        ,  //68:64
        ms_alu_result  ,  //63:32
        ms_pc             //31:0
       } = es_to_ms_bus_r;

assign ms_to_ws_bus = {ms_gr_we       ,  //69:69
                       ms_dest        ,  //68:64
                       ms_final_result,  //63:32
                       ms_pc             //31:0
                      };

assign ms_ready_go    = 1'b1;
assign ms_allowin     = !ms_valid || ms_ready_go && ws_allowin;
assign ms_to_ws_valid = ms_valid && ms_ready_go;

always @(posedge clk) begin
    if (reset) begin
        ms_valid <= 1'b0;
    end
    else if (ms_allowin) begin
        ms_valid <= es_to_ms_valid;
    end
    if (es_to_ms_valid && ms_allowin) begin
        es_to_ms_bus_r <= es_to_ms_bus;  // nonblocking, consistent with the other pipeline registers
    end
end

assign mem_result      = data_sram_rdata;
assign ms_final_result = ms_res_from_mem ? mem_result : ms_alu_result;

assign ms_to_ds_dest = ms_dest & {5{ms_valid}};

endmodule

// ============================================================

`include "mycpu.h"

module mycpu_top(
    input  wire        clk,
    input  wire        resetn,
    // inst sram interface
    output wire        inst_sram_en,
    output wire [ 3:0] inst_sram_we,
    output wire [31:0] inst_sram_addr,
    output wire [31:0] inst_sram_wdata,
    input  wire [31:0] inst_sram_rdata,
    // data sram interface
    output wire        data_sram_en,
    output wire [ 3:0] data_sram_we,
    output wire [31:0] data_sram_addr,
    output wire [31:0] data_sram_wdata,
    input  wire [31:0] data_sram_rdata,
    // trace debug interface
    output wire [31:0] debug_wb_pc,
    output wire [ 3:0] debug_wb_rf_we,
    output wire [ 4:0] debug_wb_rf_wnum,
    output wire [31:0] debug_wb_rf_wdata
);

reg reset;
always @(posedge clk) reset <= ~resetn;

wire ds_allowin;
wire es_allowin;
wire ms_allowin;
wire ws_allowin;

wire fs_to_ds_valid;
wire ds_to_es_valid;
wire es_to_ms_valid;
wire ms_to_ws_valid;

wire [`FS_TO_DS_BUS_WD -1:0] fs_to_ds_bus;
wire [`DS_TO_ES_BUS_WD -1:0] ds_to_es_bus;
wire [`ES_TO_MS_BUS_WD -1:0] es_to_ms_bus;
wire [`MS_TO_WS_BUS_WD -1:0] ms_to_ws_bus;
wire [`WS_TO_RF_BUS_WD -1:0] ws_to_rf_bus;
wire [`BR_BUS_WD       -1:0] br_bus;

wire [4:0] es_to_ds_dest;
wire [4:0] ms_to_ds_dest;
wire [4:0] ws_to_ds_dest;
wire       es_to_ds_load_op;

// IF stage
if_stage if_stage(
    .clk            (clk            ),
    .reset          (reset          ),
    //allowin
    .ds_allowin     (ds_allowin     ),
    //brbus
    .br_bus         (br_bus         ),
    //outputs
    .fs_to_ds_valid (fs_to_ds_valid ),
    .fs_to_ds_bus   (fs_to_ds_bus   ),
    // inst sram interface
    .inst_sram_en   (inst_sram_en   ),
    .inst_sram_we   (inst_sram_we   ),
    .inst_sram_addr (inst_sram_addr ),
    .inst_sram_wdata(inst_sram_wdata),
    .inst_sram_rdata(inst_sram_rdata)
);
// ID stage
id_stage id_stage(
    .clk             (clk             ),
    .reset           (reset           ),
    //allowin
    .es_allowin      (es_allowin      ),
    .ds_allowin      (ds_allowin      ),
    //from fs
    .fs_to_ds_valid  (fs_to_ds_valid  ),
    .fs_to_ds_bus    (fs_to_ds_bus    ),
    //to es
    .ds_to_es_valid  (ds_to_es_valid  ),
    .ds_to_es_bus    (ds_to_es_bus    ),
    //to fs
    .br_bus          (br_bus          ),
    //to rf: for write back
    .ws_to_rf_bus    (ws_to_rf_bus    ),
    //Hazard detection signals
    .es_to_ds_dest   (es_to_ds_dest   ),
    .ms_to_ds_dest   (ms_to_ds_dest   ),
    .ws_to_ds_dest   (ws_to_ds_dest   ),
    .es_to_ds_load_op(es_to_ds_load_op)
);
// EXE stage
exe_stage exe_stage(
    .clk             (clk             ),
    .reset           (reset           ),
    //allowin
    .ms_allowin      (ms_allowin      ),
    .es_allowin      (es_allowin      ),
    //from ds
    .ds_to_es_valid  (ds_to_es_valid  ),
    .ds_to_es_bus    (ds_to_es_bus    ),
    //to ms
    .es_to_ms_valid  (es_to_ms_valid  ),
    .es_to_ms_bus    (es_to_ms_bus    ),
    //data sram interface
    .data_sram_en    (data_sram_en    ),
    .data_sram_we    (data_sram_we    ),
    .data_sram_addr  (data_sram_addr  ),
    .data_sram_wdata (data_sram_wdata ),
    //Hazard detection signals
    .es_to_ds_dest   (es_to_ds_dest   ),
    .es_to_ds_load_op(es_to_ds_load_op)
);
// MEM stage
mem_stage mem_stage(
    .clk            (clk            ),
    .reset          (reset          ),
    //allowin
    .ws_allowin     (ws_allowin     ),
    .ms_allowin     (ms_allowin     ),
    //from es
    .es_to_ms_valid (es_to_ms_valid ),
    .es_to_ms_bus   (es_to_ms_bus   ),
    //to ws
    .ms_to_ws_valid (ms_to_ws_valid ),
    .ms_to_ws_bus   (ms_to_ws_bus   ),
    //from data-sram
    .data_sram_rdata(data_sram_rdata),
    //Hazard detection signals
    .ms_to_ds_dest  (ms_to_ds_dest  )
);
// WB stage
wb_stage wb_stage(
    .clk              (clk              ),
    .reset            (reset            ),
    //allowin
    .ws_allowin       (ws_allowin       ),
    //from ms
    .ms_to_ws_valid   (ms_to_ws_valid   ),
    .ms_to_ws_bus     (ms_to_ws_bus     ),
    //to rf: for write back
    .ws_to_rf_bus     (ws_to_rf_bus     ),
    //trace debug interface
    .debug_wb_pc      (debug_wb_pc      ),
    .debug_wb_rf_we   (debug_wb_rf_we   ),
    .debug_wb_rf_wnum (debug_wb_rf_wnum ),
    .debug_wb_rf_wdata(debug_wb_rf_wdata),
    //Hazard detection signals
    .ws_to_ds_dest    (ws_to_ds_dest    )
);

endmodule

// ============================================================

`include "mycpu.h"

module wb_stage(
    input  wire clk,
    input  wire reset,
    //allowin
    output wire ws_allowin,
    //from ms
    input  wire                          ms_to_ws_valid,
    input  wire [`MS_TO_WS_BUS_WD -1:0]  ms_to_ws_bus,
    //to rf: for write back
    output wire [`WS_TO_RF_BUS_WD -1:0]  ws_to_rf_bus,
    //trace debug interface
    output wire [31:0] debug_wb_pc      ,
    output wire [ 3:0] debug_wb_rf_we   ,
    output wire [ 4:0] debug_wb_rf_wnum ,
    output wire [31:0] debug_wb_rf_wdata,
    output wire [ 4:0] ws_to_ds_dest
);

reg         ws_valid;
wire        ws_ready_go;

reg [`MS_TO_WS_BUS_WD -1:0] ms_to_ws_bus_r;

wire        ws_gr_we;
wire [ 4:0] ws_dest;
wire [31:0] ws_final_result;
wire [31:0] ws_pc;

assign {ws_gr_we       ,  //69:69
        ws_dest        ,  //68:64
        ws_final_result,  //63:32
        ws_pc             //31:0
       } = ms_to_ws_bus_r;

wire        rf_we;
wire [ 4:0] rf_waddr;
wire [31:0] rf_wdata;

assign ws_to_rf_bus = {rf_we   ,  //37:37
                       rf_waddr,  //36:32
                       rf_wdata   //31:0
                      };

assign ws_ready_go = 1'b1;
assign ws_allowin  = !ws_valid || ws_ready_go;

always @(posedge clk) begin
    if (reset) begin
        ws_valid <= 1'b0;
    end
    else if (ws_allowin) begin
        ws_valid <= ms_to_ws_valid;
    end
    if (ms_to_ws_valid && ws_allowin) begin
        ms_to_ws_bus_r <= ms_to_ws_bus;
    end
end

assign rf_we    = ws_gr_we && ws_valid;
assign rf_waddr = ws_dest;
assign rf_wdata = ws_final_result;

// debug info generate
assign debug_wb_pc       = ws_pc;
assign debug_wb_rf_we    = {4{rf_we}};
assign debug_wb_rf_wnum  = ws_dest;
assign debug_wb_rf_wdata = ws_final_result;

assign ws_to_ds_dest = ws_dest & {5{ws_valid}};

endmodule
```

This is my lab code. I want to modify it, building on this code, to implement forwarding (bypassing). I am using the LoongArch32 instruction architecture, and the instructions I need to support are add.w, addi.w, sub.w, ld.w, st.w, beq, bne, b, bl, jirl, slt, sltu, slli.w, srli.w, srai.w, lu12i.w, and, nor, or, xor. Before I ask you to write code, do not generate any code; first, tell me how you plan to do it.

1. The car autonomously follows the line from the start of the track to the far end; 2. once at the far end, it plans a path inside the track on its own, avoids the obstacles, and returns to the start point and stops.

```python
#! /usr/bin/env python
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

class OmoveFollowing():
    def __init__(self):
        rospy.init_node('omove_following')
        self.cv_bridge = CvBridge()
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rospy.Subscriber('/camera/color/image_raw', Image, self.color_img_cb)
        self.lane_img_pub = rospy.Publisher('/lane_image', Image, queue_size=5)
        rospy.Subscriber('/camera/depth/image_rect_raw', Image, self.depth_img_cb)
        self.yaw = 0.0
        self.flag = 0
        self.blocked = False
        while not rospy.is_shutdown():
            vel_cmd = Twist()
            if self.blocked:
                for i in range(20):
                    vel_cmd.linear.y = 0.1
                    # vel_cmd.linear.y = 0.08
                    self.cmd_pub.publish(vel_cmd)
                    rospy.sleep(0.1)
                for i in range(30):
                    vel_cmd.linear.x = -0.1
                    # vel_cmd.linear.x = -0.08
                    self.cmd_pub.publish(vel_cmd)
                    rospy.sleep(0.1)
                for i in range(20):
                    vel_cmd.linear.y = -0.1
                    # vel_cmd.linear.y = -0.08
                    self.cmd_pub.publish(vel_cmd)
                    rospy.sleep(0.1)
                self.blocked = False
            else:
                vel_cmd.linear.x = 0.7
                # vel_cmd.linear.x = 0.08
                # vel_cmd.angular.z = -self.yaw * 0.8
                vel_cmd.angular.z = -self.yaw * 0.4
                if self.flag:
                    #rospy.sleep(1)
                    rospy.loginfo("yaw is unchange!")
                    vel_cmd.linear.x = 0.0
                    vel_cmd.angular.z = 0.0
                    rospy.signal_shutdown('no')
                    #break
                self.cmd_pub.publish(vel_cmd)
                rospy.sleep(0.05)

    def color_img_cb(self, data):
        try:
            cv_img = self.cv_bridge.imgmsg_to_cv2(data, 'bgr8')
        except CvBridgeError as e:
            print(e)
        height, width, channels = cv_img.shape
        crop_img = cv_img[(height)/2+100:height][1:width]
        lower = np.array([10, 20, 10], dtype="uint8")
        upper = np.array([40, 50, 40], dtype="uint8")
        mask = cv2.inRange(crop_img, lower, upper)
        extraction = cv2.bitwise_and(crop_img, crop_img, mask=mask)
        m = cv2.moments(mask, False)
        try:
            x, y = m['m10']/m['m00'], m['m01']/m['m00']
        except ZeroDivisionError:
            x, y = height/2, width/2
        self.yaw = (x - width/2)/320.0
        cv2.circle(extraction, (int(x), int(y)), 2, (0, 255, 0), 3)
        gray = cv2.cvtColor(extraction, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150, apertureSize=3)
        lines = cv2.HoughLinesP(edges, 1, np.pi/180, threshold=100, minLineLength=100, maxLineGap=10)
        self.flag = 0
        if lines is not None:
            for line in lines:
                x1, y1, x2, y2 = line[0]
                slope = abs((y2-y1)/(x2-x1+1e-6))
                if abs(slope) < 0.1:
                    self.flag = 1
                    cv2.line(extraction, (x1, y1), (x2, y2), (0, 0, 255), 2)
        if not rospy.is_shutdown():
            self.lane_img_pub.publish(self.cv_bridge.cv2_to_imgmsg(np.hstack([crop_img, extraction]), "bgr8"))
            cv2.imshow("Image window", np.hstack([crop_img, extraction]))
            cv2.imshow('test', cv_img)
            cv2.waitKey(1)

    def depth_img_cb(self, data):
        try:
            depth_img = self.cv_bridge.imgmsg_to_cv2(data, '16UC1')
        except CvBridgeError as e:
            print(e)
        height, width = depth_img.shape
        for dis in (depth_img[400:401, 0:width])[0]:
            if 300 < dis < 350:
                self.blocked = False
                break

if __name__ == '__main__':
    OmoveFollowing()
    rospy.spin()
    cv2.destroyAllWindows()
```

This is my line-following code. After the car follows the line to the end point, I want it to take the line-following start point as the goal and autonomously start path-planning navigation back to that start point. The autonomous navigation should publish the command the way RViz does. Please give Python code.
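A minimal sketch of one way to do this, assuming a move_base navigation stack is already running: RViz's "2D Nav Goal" tool publishes a `geometry_msgs/PoseStamped` on `/move_base_simple/goal`, so the same message can be published from code. The start-point coordinates and the `map` frame below are assumptions to be replaced with the recorded start pose:

```python
#!/usr/bin/env python
# Minimal sketch: publish a navigation goal the same way RViz's "2D Nav Goal"
# tool does. Assumes move_base is running and the start pose is known in "map".
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('goal_sender')
pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect before sending

goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = rospy.Time.now()
goal.pose.position.x = 0.0      # placeholder: recorded start-point x
goal.pose.position.y = 0.0      # placeholder: recorded start-point y
goal.pose.orientation.w = 1.0   # placeholder: desired final heading
pub.publish(goal)
```

Publishing to `/move_base_simple/goal` gives no feedback; if the car needs to know when the goal is reached, the next step would be the `move_base` action interface (`actionlib` with `move_base_msgs/MoveBaseAction`).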

```python
# -*- coding: utf-8 -*-
import numpy as np
import cv2
import time
import math
from pyzbar import pyzbar
from connect_uav import UPUavControl
from multiprocessing import Process
from multiprocessing.managers import BaseManager

cap = cv2.VideoCapture(0)
no_slice = 4
center = (240, 227)

# Global speed limits
MAX_SPEED = 250
MAX_TURN_SPEED = 180
MAX_ADJUST_SPEED = 100

class LineFollower():
    def __init__(self):
        self.is_follow = True
        self.no_line_count = 0
        self.last_angle = 0
        self.turn_count = 0
        # Current speed state
        self.current_speed = {'forward': 0, 'turn': 0, 'lateral': 0}
        # QR-code related attributes
        self.scan_content = ""
        self.adjusting_for_landing = False
        self.qr_detected = False
        self.qr_adjust_count = 0
        self.no_qr_count = 0
        self.last_qr_position = None  # last observed QR-code position
        self.last_qr_size = 0         # last observed QR-code size
        # Simplified control parameter
        self.min_qr_size = 100  # minimum QR-code size (pixels) for the landing decision

        BaseManager.register('UPUavControl', UPUavControl)
        manager = BaseManager()
        manager.start()
        self.airplanceApi = manager.UPUavControl()
        controllerAir = Process(name="controller_air", target=self.api_init)
        controllerAir.start()
        time.sleep(1)
        self.airplanceApi.onekey_takeoff(60)
        time.sleep(5)
        # Fly forward for 3 seconds after takeoff, at a higher speed
        print("Takeoff complete, flying forward...")
        self.set_speed('forward', 200)
        time.sleep(3)
        # Stop moving forward
        self.set_speed('forward', 0)
        print("Forward flight done, starting the line-following task")
        self.start_video()
        super(LineFollower, self).__init__()

    def set_speed(self, direction, speed):
        """Smooth speed transitions and clamp to the maximum speed."""
        # Clamp to the global maximum
        speed = min(speed, MAX_SPEED)
        # Smooth transition: limit the per-call acceleration
        current = self.current_speed[direction]
        if abs(speed - current) > 40:
            speed = current + 40 if speed > current else current - 40
        # Send the command through the API
        if direction == 'forward':
            self.airplanceApi.move_forward(speed)
        elif direction == 'turn':
            if speed > 0:
                self.airplanceApi.turn_left(min(speed, MAX_TURN_SPEED))
            else:
                self.airplanceApi.turn_right(min(-speed, MAX_TURN_SPEED))
        elif direction == 'lateral':
            if speed > 0:
                self.airplanceApi.move_right(min(speed, MAX_ADJUST_SPEED))
            else:
                self.airplanceApi.move_left(min(-speed, MAX_ADJUST_SPEED))
        # Update the state
        self.current_speed[direction] = speed
        return speed

    def api_init(self):
        print("Flight-control process started")
        time.sleep(0.5)
        self.airplanceApi.setServoPosition(90)

    def start_video(self):
        last_frame_time = time.time()
        while cap.isOpened():
            start_time = time.time()
            ret, frame = cap.read()
            if not ret:
                break
            # Use an ROI to reduce the processed area (keep the middle region)
            height, width = frame.shape[:2]
            roi_top = height // 4
            roi_bottom = height * 3 // 4
            roi_left = width // 4
            roi_right = width * 3 // 4
            roi_frame = frame[roi_top:roi_bottom, roi_left:roi_right]
            # Downscale
            frame = cv2.resize(roi_frame, (0, 0), fx=0.75, fy=0.75)
            # Adjust QR-detection frequency based on the current state
            qr_check_interval = 1 if self.adjusting_for_landing else 3
            if self.qr_adjust_count % qr_check_interval == 0:
                self.decode(frame)
            self.qr_adjust_count += 1
            # If the landing QR code was recognized and the conditions hold, land
            if self.scan_content == "landed" and self.adjusting_for_landing:
                # Check that the QR code is centered and large enough
                if self.is_qr_centered_and_large():
                    print("QR code centered and large enough, landing")
                    self.airplanceApi.land()
                    time.sleep(3)
                    cap.release()
                    cv2.destroyAllWindows()
                    return  # end the program
            # Handle losing sight of the QR code
            if not self.qr_detected and self.adjusting_for_landing:
                self.no_qr_count += 1
                if self.no_qr_count > 30:  # 30 consecutive frames without a QR code
                    print("QR code lost, stopping adjustment")
                    self.adjusting_for_landing = False
                    self.no_qr_count = 0
            else:
                self.no_qr_count = 0
            # While adjusting position for landing, skip line detection
            if self.adjusting_for_landing:
                # Show the adjustment state
                display_frame = frame.copy()
                status_text = "Adjusting for Landing"
                cv2.putText(display_frame, status_text, (50, 50),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.imshow('line_tracking', display_frame)
                # Dynamic frame-rate handling
                processing_time = time.time() - start_time
                delay = max(1, int(30 - processing_time * 1000))  # aim for ~30 FPS
                if cv2.waitKey(delay) & 0xFF == ord('q'):
                    break
                continue
            # Normal line following
            img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            img = self.remove_background(img, True)
            slices, cont_cent = self.slice_out(img, no_slice)
            img = self.repack(slices)
            if all(c[0] == 0 and c[1] == 0 for c in cont_cent):
                self.no_line_count += 1
                print(f"No line detected, count: {self.no_line_count}")
                if self.no_line_count >= 2:  # respond faster
                    print("No black line for several frames, searching")
                    search_speed = 150  # faster search
                    if self.no_line_count % 4 == 0:  # optimized search pattern
                        self.set_speed('turn', search_speed)
                        print("Searching left")
                    elif self.no_line_count % 4 == 2:
                        self.set_speed('turn', -search_speed)
                        print("Searching right")
            else:
                self.no_line_count = 0
                self.line(img, center, cont_cent)
            # Show the image
            cv2.imshow('line_tracking', img)
            # Dynamic frame-rate handling
            processing_time = time.time() - start_time
            delay = max(1, int(30 - processing_time * 1000))  # aim for ~30 FPS
            if cv2.waitKey(delay) & 0xFF == ord('q'):
                break
        cap.release()
        cv2.destroyAllWindows()

    def is_qr_centered_and_large(self):
        """Check whether the QR code is inside the center region and large enough."""
        if not self.last_qr_position:
            return False
        # Image size (assumed 480x360 after the 0.75 downscale)
        height, width = 360, 480
        # Center region (center ±100 pixels)
        center_region_x = (width / 2 - 100, width / 2 + 100)
        center_region_y = (height / 2 - 100, height / 2 + 100)
        # Check the QR-code center position
        center_x, center_y = self.last_qr_position
        in_center = (center_region_x[0] <= center_x <= center_region_x[1] and
                     center_region_y[0] <= center_y <= center_region_y[1])
        # Check the QR-code size
        large_enough = self.last_qr_size > self.min_qr_size
        print(f"QR Center: ({center_x:.1f}, {center_y:.1f}), Size: {self.last_qr_size}, "
              f"In Center: {in_center}, Large Enough: {large_enough}")
        return in_center and large_enough

    def get_contour_center(self, contour):
        m = cv2.moments(contour)
        if m["m00"] == 0:
            return [0, 0]
        x = int(m["m10"] / m["m00"])
        y = int(m["m01"] / m["m00"])
        return [x, y]

    def process(self, image):
        _, thresh = cv2.threshold(image, 100, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            main_contour = max(contours, key=cv2.contourArea)
            cv2.drawContours(image, [main_contour], -1, (150, 150, 150), 2)
            contour_center = self.get_contour_center(main_contour)
            cv2.circle(image, tuple(contour_center), 2, (150, 150, 150), 2)
        else:
            return image, (0, 0)
        return image, contour_center

    def slice_out(self, im, num):
        cont_cent = []
        height, width = im.shape[:2]
        sl = int(height / num)
        sliced_imgs = []
        for i in range(num):
            part = sl * i
            crop_img = im[part:part + sl, 0:width]
            processed = self.process(crop_img)
            sliced_imgs.append(processed[0])
            cont_cent.append(processed[1])
        return sliced_imgs, cont_cent

    def remove_background(self, image, b):
        if b:
            lower = np.array([0], dtype="uint8")
            upper = np.array([80], dtype="uint8")
            mask = cv2.inRange(image, lower, upper)
            image = cv2.bitwise_and(image, image, mask=mask)
            image = cv2.bitwise_not(image, image, mask=mask)
            image = (255 - image)
        return image

    def repack(self, images):
        im = images[0]
        for i in range(1, len(images)):
            im = np.concatenate((im, images[i]), axis=0)
        return im

    def calculate_path_angle(self, cont_cent):
        """Compute the bending angle of the path."""
        valid_points = [c for c in cont_cent if c != (0, 0)]
        if len(valid_points) < 2:
            return self.last_angle
        top_point = valid_points[0]
        bottom_point = valid_points[-1]
        dx = bottom_point[0] - top_point[0]
        dy = (len(valid_points) - 1) * 65
        if dy == 0:
            return self.last_angle
        angle_rad = math.atan2(dx, dy)
        angle_deg = math.degrees(angle_rad)
        self.last_angle = angle_deg
        return angle_deg

    def line(self, image, center, cont_cent):
        cv2.line(image, (0, 65), (480, 65), (30, 30, 30), 1)
        cv2.line(image, (0, 130), (480, 130), (30, 30, 30), 1)
        cv2.line(image, (0, 195), (480, 195), (30, 30, 30), 1)
        cv2.line(image, (240, 0), (240, 260), (30, 30, 30), 1)
        prev_point = None
        for i, point in enumerate(cont_cent):
            if point != (0, 0):
                y_pos = point[1] + 65 * i
                cv2.circle(image, (point[0], y_pos), 3, (0, 0, 255), -1)
                if prev_point is not None:
                    cv2.line(image, prev_point, (point[0], y_pos), (0, 255, 0), 2)
                prev_point = (point[0], y_pos)
        path_angle = self.calculate_path_angle(cont_cent)
        weights = [0.1, 0.2, 0.3, 0.4]
        valid = [(c[0], w) for c, w in zip(cont_cent, weights) if c != (0, 0)]
        if not valid:
            print("No valid center points detected")
            return
        offset_center = sum((x - 240) * w for x, w in valid)
        print(f"cont_cent: {cont_cent}")
        print(f"offset_center: {offset_center:.2f}")
        print(f"Path angle: {path_angle:.2f} degrees")
        if self.is_follow:
            # Optimized angle control
            if abs(path_angle) > 15:
                self.turn_count += 1
                if self.turn_count > 2:  # wait fewer frames
                    # Dynamic speed: the larger the angle, the faster the turn
                    turn_speed = min(MAX_TURN_SPEED, 100 + abs(int(path_angle)))
                    actual_speed = self.set_speed('turn', turn_speed if path_angle > 0 else -turn_speed)
                    print(f"Turning {'left' if path_angle > 0 else 'right'}, speed: {abs(actual_speed)}")
                else:
                    self.set_speed('forward', 150)
                    print("Preparing to turn...")
            else:
                self.turn_count = 0
                # Optimized offset control with a dynamic threshold
                offset_threshold = max(60, 120 - abs(path_angle) * 2)
                if abs(offset_center) > offset_threshold:
                    # Dynamic speed: the larger the offset, the faster the lateral move
                    move_speed = min(MAX_ADJUST_SPEED, 120 + int(abs(offset_center) / 2))
                    actual_speed = self.set_speed('lateral', move_speed if offset_center > 0 else -move_speed)
                    print(f"{'Moving right' if offset_center > 0 else 'Moving left'}, speed: {abs(actual_speed)}")
                else:
                    # Higher straight-line speed
                    actual_speed = self.set_speed('forward', 220)
                    print(f"Forward, speed: {actual_speed}")

    # Optimized QR-code recognition
    def decode(self, image):
        """Detect QR codes in the image."""
        # Reset this frame's detection state
        self.qr_detected = False
        # Preprocess the image to improve the detection rate
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # Boost contrast
        gray = cv2.convertScaleAbs(gray, alpha=1.5, beta=20)
        # Median filter to remove noise
        gray = cv2.medianBlur(gray, 3)
        # Try to decode QR codes
        barcodes = pyzbar.decode(gray)
        for barcode in barcodes:
            (x, y, w, h) = barcode.rect
            # Filter out codes that are too small (likely false positives)
            if w < 30 or h < 30:
                continue
            cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)
            try:
                barcodeData = barcode.data.decode("utf-8")
            except:
                continue
            text = "{}".format(barcodeData)
            cv2.putText(image, text, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
            center_x = x + w / 2
            center_y = y + h / 2
            cv2.circle(image, (int(center_x), int(center_y)), 5, (0, 255, 0), -1)
            self.qr_detected = True
            self.last_qr_position = (center_x, center_y)
            self.last_qr_size = max(w, h)  # record the QR-code size
            # Only handle QR codes whose content is "landed"
            if barcodeData == "landed":
                self.scan_content = "landed"
                if not self.adjusting_for_landing:
                    print(f"Landing QR code recognized: {text}")
                    self.adjusting_for_landing = True
                # Simplified adjustment: only consider QR position and size
                # Image size
                height, width = image.shape[:2]
                # Center region (center ±100 pixels)
                center_region_x = (width / 2 - 100, width / 2 + 100)
                center_region_y = (height / 2 - 100, height / 2 + 100)
                # Check the QR-code center position
                in_center = (center_region_x[0] <= center_x <= center_region_x[1] and
                             center_region_y[0] <= center_y <= center_region_y[1])
                # Adjust the drone position
                if not in_center:
                    # Horizontal adjustment with proportional control
                    h_offset = abs(center_x - width / 2)
                    h_speed = min(MAX_ADJUST_SPEED, 40 + int(h_offset / 4))
                    if center_x < width / 2:
                        self.airplanceApi.move_left(h_speed)
                        print(f"Moving left, speed: {h_speed}")
                    else:
                        self.airplanceApi.move_right(h_speed)
                        print(f"Moving right, speed: {h_speed}")
                    # Vertical adjustment
                    v_speed = 40
                    if center_y < height / 2:
                        self.airplanceApi.move_up(v_speed)
                        print(f"Moving up, speed: {v_speed}")
                    else:
                        self.airplanceApi.move_down(v_speed)
                        print(f"Moving down, speed: {v_speed}")
                else:
                    # Stop horizontal movement once inside the center region
                    self.airplanceApi.move_left(0)
                    self.airplanceApi.move_right(0)
                    # Descend if the QR code is not large enough yet
                    if self.last_qr_size < self.min_qr_size:
                        self.airplanceApi.move_down(40)
                        print("Descending toward the QR code, speed: 40")
                    else:
                        self.airplanceApi.move_down(0)
                        print("QR code size is adequate")
                # Forced landing condition: QR code centered and large enough
                if in_center and self.last_qr_size > self.min_qr_size:
                    print("QR code centered and large enough, preparing to land")
            break  # handle only one QR code per frame

if __name__ == '__main__':
    line_follower = LineFollower()
```

Please take a look at this code and tell me how it is.

import cv2 import numpy as np from numpy.linalg import norm import sys import os import json SZ = 20 #训练图片长宽 MAX_WIDTH = 1000 #原始图片最大宽度 Min_Area = 2000 #车牌区域允许最大面积 PROVINCE_START = 1000 #读取图片文件 def imreadex(filename): return cv2.imdecode(np.fromfile(filename, dtype=np.uint8), cv2.IMREAD_COLOR) def point_limit(point): if point[0] < 0: point[0] = 0 if point[1] < 0: point[1] = 0 #根据设定的阈值和图片直方图,找出波峰,用于分隔字符 def find_waves(threshold, histogram): up_point = -1#上升点 is_peak = False if histogram[0] > threshold: up_point = 0 is_peak = True wave_peaks = [] for i,x in enumerate(histogram): if is_peak and x < threshold: if i - up_point > 2: is_peak = False wave_peaks.append((up_point, i)) elif not is_peak and x >= threshold: is_peak = True up_point = i if is_peak and up_point != -1 and i - up_point > 4: wave_peaks.append((up_point, i)) return wave_peaks #根据找出的波峰,分隔图片,从而得到逐个字符图片 def seperate_card(img, waves): part_cards = [] for wave in waves: part_cards.append(img[:, wave[0]:wave[1]]) return part_cards #来自opencv的sample,用于svm训练 def deskew(img): m = cv2.moments(img) if abs(m['mu02']) < 1e-2: return img.copy() skew = m['mu11']/m['mu02'] M = np.float32([[1, skew, -0.5*SZ*skew], [0, 1, 0]]) img = cv2.warpAffine(img, M, (SZ, SZ), flags=cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR) return img #来自opencv的sample,用于svm训练 def preprocess_hog(digits): samples = [] for img in digits: gx = cv2.Sobel(img, cv2.CV_32F, 1, 0) gy = cv2.Sobel(img, cv2.CV_32F, 0, 1) mag, ang = cv2.cartToPolar(gx, gy) bin_n = 16 bin = np.int32(bin_n*ang/(2*np.pi)) bin_cells = bin[:10,:10], bin[10:,:10], bin[:10,10:], bin[10:,10:] mag_cells = mag[:10,:10], mag[10:,:10], mag[:10,10:], mag[10:,10:] hists = [np.bincount(b.ravel(), m.ravel(), bin_n) for b, m in zip(bin_cells, mag_cells)] hist = np.hstack(hists) # transform to Hellinger kernel eps = 1e-7 hist /= hist.sum() + eps hist = np.sqrt(hist) hist /= norm(hist) + eps samples.append(hist) return np.float32(samples) #不能保证包括所有省份 provinces = [ "zh_cuan", "川", "zh_e", "鄂", "zh_gan", "赣", "zh_gan1", "甘", "zh_gui", "贵", "zh_gui1", "桂", "zh_hei", "黑", "zh_hu", "沪", "zh_ji", "冀", "zh_jin", "津", "zh_jing", "京", "zh_jl", "吉", "zh_liao", "辽", "zh_lu", "鲁", "zh_meng", "蒙", "zh_min", "闽", "zh_ning", "宁", "zh_qing", "靑", "zh_qiong", "琼", "zh_shan", "陕", "zh_su", "苏", "zh_sx", "晋", "zh_wan", "皖", "zh_xiang", "湘", "zh_xin", "新", "zh_yu", "豫", "zh_yu1", "渝", "zh_yue", "粤", "zh_yun", "云", "zh_zang", "藏", "zh_zhe", "浙" ] class StatModel(object): def load(self, fn): self.model = self.model.load(fn) def save(self, fn): self.model.save(fn) class SVM(StatModel): def __init__(self, C = 1, gamma = 0.5): self.model = cv2.ml.SVM_create() self.model.setGamma(gamma) self.model.setC(C) self.model.setKernel(cv2.ml.SVM_RBF) self.model.setType(cv2.ml.SVM_C_SVC) #训练svm def train(self, samples, responses): self.model.train(samples, cv2.ml.ROW_SAMPLE, responses) #字符识别 def predict(self, samples): r = self.model.predict(samples) return r[1].ravel() class CardPredictor: def __init__(self): #车牌识别的部分参数保存在js中,便于根据图片分辨率做调整 f = open('config.js') j = json.load(f) for c in j["config"]: if c["open"]: self.cfg = c.copy() break else: raise RuntimeError('没有设置有效配置参数') def __del__(self): self.save_traindata() def train_svm(self): #识别英文字母和数字 self.model = SVM(C=1, gamma=0.5) #识别中文 self.modelchinese = SVM(C=1, gamma=0.5) if os.path.exists("svm.dat"): self.model.load("svm.dat") else: chars_train = [] chars_label = [] for root, dirs, files in os.walk("train\\chars2"): if len(os.path.basename(root)) > 1: continue root_int = ord(os.path.basename(root)) for 
filename in files: filepath = os.path.join(root,filename) digit_img = cv2.imread(filepath) digit_img = cv2.cvtColor(digit_img, cv2.COLOR_BGR2GRAY) chars_train.append(digit_img) #chars_label.append(1) chars_label.append(root_int) chars_train = list(map(deskew, chars_train)) chars_train = preprocess_hog(chars_train) #chars_train = chars_train.reshape(-1, 20, 20).astype(np.float32) chars_label = np.array(chars_label) self.model.train(chars_train, chars_label) if os.path.exists("svmchinese.dat"): self.modelchinese.load("svmchinese.dat") else: chars_train = [] chars_label = [] for root, dirs, files in os.walk("train\\charsChinese"): if not os.path.basename(root).startswith("zh_"): continue pinyin = os.path.basename(root) index = provinces.index(pinyin) + PROVINCE_START + 1 #1是拼音对应的汉字 for filename in files: filepath = os.path.join(root,filename) digit_img = cv2.imread(filepath) digit_img = cv2.cvtColor(digit_img, cv2.COLOR_BGR2GRAY) chars_train.append(digit_img) #chars_label.append(1) chars_label.append(index) chars_train = list(map(deskew, chars_train)) chars_train = preprocess_hog(chars_train) #chars_train = chars_train.reshape(-1, 20, 20).astype(np.float32) chars_label = np.array(chars_label) print(chars_train.shape) self.modelchinese.train(chars_train, chars_label) def save_traindata(self): if not os.path.exists("svm.dat"): self.model.save("svm.dat") if not os.path.exists("svmchinese.dat"): self.modelchinese.save("svmchinese.dat") def accurate_place(self, card_img_hsv, limit1, limit2, color): row_num, col_num = card_img_hsv.shape[:2] xl = col_num xr = 0 yh = 0 yl = row_num #col_num_limit = self.cfg["col_num_limit"] row_num_limit = self.cfg["row_num_limit"] col_num_limit = col_num * 0.8 if color != "green" else col_num * 0.5#绿色有渐变 for i in range(row_num): count = 0 for j in range(col_num): H = card_img_hsv.item(i, j, 0) S = card_img_hsv.item(i, j, 1) V = card_img_hsv.item(i, j, 2) if limit1 < H <= limit2 and 34 < S and 46 < V: count += 1 if count > col_num_limit: if yl > i: yl = i if yh < i: yh = i for j in range(col_num): count = 0 for i in range(row_num): H = card_img_hsv.item(i, j, 0) S = card_img_hsv.item(i, j, 1) V = card_img_hsv.item(i, j, 2) if limit1 < H <= limit2 and 34 < S and 46 < V: count += 1 if count > row_num - row_num_limit: if xl > j: xl = j if xr < j: xr = j return xl, xr, yh, yl def predict(self, car_pic, resize_rate=1): if type(car_pic) == type(""): img = imreadex(car_pic) else: img = car_pic pic_hight, pic_width = img.shape[:2] if pic_width > MAX_WIDTH: pic_rate = MAX_WIDTH / pic_width img = cv2.resize(img, (MAX_WIDTH, int(pic_hight*pic_rate)), interpolation=cv2.INTER_LANCZOS4) if resize_rate != 1: img = cv2.resize(img, (int(pic_width*resize_rate), int(pic_hight*resize_rate)), interpolation=cv2.INTER_LANCZOS4) pic_hight, pic_width = img.shape[:2] print("h,w:", pic_hight, pic_width) blur = self.cfg["blur"] #高斯去噪 if blur > 0: img = cv2.GaussianBlur(img, (blur, blur), 0)#图片分辨率调整 oldimg = img img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #equ = cv2.equalizeHist(img) #img = np.hstack((img, equ)) #去掉图像中不会是车牌的区域 kernel = np.ones((20, 20), np.uint8) img_opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel) img_opening = cv2.addWeighted(img, 1, img_opening, -1, 0); #找到图像边缘 ret, img_thresh = cv2.threshold(img_opening, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) img_edge = cv2.Canny(img_thresh, 100, 200) #使用开运算和闭运算让图像边缘成为一个整体 kernel = np.ones((self.cfg["morphologyr"], self.cfg["morphologyc"]), np.uint8) img_edge1 = cv2.morphologyEx(img_edge, cv2.MORPH_CLOSE, kernel) img_edge2 = 
cv2.morphologyEx(img_edge1, cv2.MORPH_OPEN, kernel) #查找图像边缘整体形成的矩形区域,可能有很多,车牌就在其中一个矩形区域中 try: contours, hierarchy = cv2.findContours(img_edge2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) except ValueError: image, contours, hierarchy = cv2.findContours(img_edge2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) contours = [cnt for cnt in contours if cv2.contourArea(cnt) > Min_Area] print('len(contours)', len(contours)) #一一排除不是车牌的矩形区域 car_contours = [] for cnt in contours: rect = cv2.minAreaRect(cnt) area_width, area_height = rect[1] if area_width < area_height: area_width, area_height = area_height, area_width wh_ratio = area_width / area_height #print(wh_ratio) #要求矩形区域长宽比在2到5.5之间,2到5.5是车牌的长宽比,其余的矩形排除 if wh_ratio > 2 and wh_ratio < 5.5: car_contours.append(rect) box = cv2.boxPoints(rect) box = np.intp(box) #oldimg = cv2.drawContours(oldimg, [box], 0, (0, 0, 255), 2) #cv2.imshow("edge4", oldimg) #cv2.waitKey(0) print(len(car_contours)) print("精确定位") card_imgs = [] #矩形区域可能是倾斜的矩形,需要矫正,以便使用颜色定位 for rect in car_contours: if rect[2] > -1 and rect[2] < 1:#创造角度,使得左、高、右、低拿到正确的值 angle = 1 else: angle = rect[2] rect = (rect[0], (rect[1][0]+5, rect[1][1]+5), angle)#扩大范围,避免车牌边缘被排除 box = cv2.boxPoints(rect) heigth_point = right_point = [0, 0] left_point = low_point = [pic_width, pic_hight] for point in box: if left_point[0] > point[0]: left_point = point if low_point[1] > point[1]: low_point = point if heigth_point[1] < point[1]: heigth_point = point if right_point[0] < point[0]: right_point = point if left_point[1] <= right_point[1]:#正角度 new_right_point = [right_point[0], heigth_point[1]] pts2 = np.float32([left_point, heigth_point, new_right_point])#字符只是高度需要改变 pts1 = np.float32([left_point, heigth_point, right_point]) M = cv2.getAffineTransform(pts1, pts2) dst = cv2.warpAffine(oldimg, M, (pic_width, pic_hight)) point_limit(new_right_point) point_limit(heigth_point) point_limit(left_point) card_img = dst[int(left_point[1]):int(heigth_point[1]), int(left_point[0]):int(new_right_point[0])] card_imgs.append(card_img) #cv2.imshow("card", card_img) #cv2.waitKey(0) elif left_point[1] > right_point[1]:#负角度 new_left_point = [left_point[0], heigth_point[1]] pts2 = np.float32([new_left_point, heigth_point, right_point])#字符只是高度需要改变 pts1 = np.float32([left_point, heigth_point, right_point]) M = cv2.getAffineTransform(pts1, pts2) dst = cv2.warpAffine(oldimg, M, (pic_width, pic_hight)) point_limit(right_point) point_limit(heigth_point) point_limit(new_left_point) card_img = dst[int(right_point[1]):int(heigth_point[1]), int(new_left_point[0]):int(right_point[0])] card_imgs.append(card_img) #cv2.imshow("card", card_img) #cv2.waitKey(0) #开始使用颜色定位,排除不是车牌的矩形,目前只识别蓝、绿、黄车牌 colors = [] for card_index,card_img in enumerate(card_imgs): green = yello = blue = black = white = 0 card_img_hsv = cv2.cvtColor(card_img, cv2.COLOR_BGR2HSV) #有转换失败的可能,原因来自于上面矫正矩形出错 if card_img_hsv is None: continue row_num, col_num= card_img_hsv.shape[:2] card_img_count = row_num * col_num for i in range(row_num): for j in range(col_num): H = card_img_hsv.item(i, j, 0) S = card_img_hsv.item(i, j, 1) V = card_img_hsv.item(i, j, 2) if 11 < H <= 34 and S > 34:#图片分辨率调整 yello += 1 elif 35 < H <= 99 and S > 34:#图片分辨率调整 green += 1 elif 99 < H <= 124 and S > 34:#图片分辨率调整 blue += 1 if 0 < H <180 and 0 < S < 255 and 0 < V < 46: black += 1 elif 0 < H <180 and 0 < S < 43 and 221 < V < 225: white += 1 color = "no" limit1 = limit2 = 0 if yello*2 >= card_img_count: color = "yello" limit1 = 11 limit2 = 34#有的图片有色偏偏绿 elif green*2 >= card_img_count: color = "green" limit1 = 35 limit2 
= 99 elif blue*2 >= card_img_count: color = "blue" limit1 = 100 limit2 = 124#有的图片有色偏偏紫 elif black + white >= card_img_count*0.7:#TODO color = "bw" print(color) colors.append(color) print(blue, green, yello, black, white, card_img_count) #cv2.imshow("color", card_img) #cv2.waitKey(0) if limit1 == 0: continue #以上为确定车牌颜色 #以下为根据车牌颜色再定位,缩小边缘非车牌边界 xl, xr, yh, yl = self.accurate_place(card_img_hsv, limit1, limit2, color) if yl == yh and xl == xr: continue need_accurate = False if yl >= yh: yl = 0 yh = row_num need_accurate = True if xl >= xr: xl = 0 xr = col_num need_accurate = True card_imgs[card_index] = card_img[yl:yh, xl:xr] if color != "green" or yl < (yh-yl)//4 else card_img[yl-(yh-yl)//4:yh, xl:xr] if need_accurate:#可能x或y方向未缩小,需要再试一次 card_img = card_imgs[card_index] card_img_hsv = cv2.cvtColor(card_img, cv2.COLOR_BGR2HSV) xl, xr, yh, yl = self.accurate_place(card_img_hsv, limit1, limit2, color) if yl == yh and xl == xr: continue if yl >= yh: yl = 0 yh = row_num if xl >= xr: xl = 0 xr = col_num card_imgs[card_index] = card_img[yl:yh, xl:xr] if color != "green" or yl < (yh-yl)//4 else card_img[yl-(yh-yl)//4:yh, xl:xr] #以上为车牌定位 #以下为识别车牌中的字符 predict_result = [] roi = None card_color = None for i, color in enumerate(colors): if color in ("blue", "yello", "green"): card_img = card_imgs[i] gray_img = cv2.cvtColor(card_img, cv2.COLOR_BGR2GRAY) #黄、绿车牌字符比背景暗、与蓝车牌刚好相反,所以黄、绿车牌需要反向 if color == "green" or color == "yello": gray_img = cv2.bitwise_not(gray_img) ret, gray_img = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) #查找水平直方图波峰 x_histogram = np.sum(gray_img, axis=1) x_min = np.min(x_histogram) x_average = np.sum(x_histogram)/x_histogram.shape[0] x_threshold = (x_min + x_average)/2 wave_peaks = find_waves(x_threshold, x_histogram) if len(wave_peaks) == 0: print("peak less 0:") continue #认为水平方向,最大的波峰为车牌区域 wave = max(wave_peaks, key=lambda x:x[1]-x[0]) gray_img = gray_img[wave[0]:wave[1]] #查找垂直直方图波峰 row_num, col_num= gray_img.shape[:2] #去掉车牌上下边缘1个像素,避免白边影响阈值判断 gray_img = gray_img[1:row_num-1] y_histogram = np.sum(gray_img, axis=0) y_min = np.min(y_histogram) y_average = np.sum(y_histogram)/y_histogram.shape[0] y_threshold = (y_min + y_average)/5#U和0要求阈值偏小,否则U和0会被分成两半 wave_peaks = find_waves(y_threshold, y_histogram) #for wave in wave_peaks: # cv2.line(card_img, pt1=(wave[0], 5), pt2=(wave[1], 5), color=(0, 0, 255), thickness=2) #车牌字符数应大于6 if len(wave_peaks) <= 6: print("peak less 1:", len(wave_peaks)) continue wave = max(wave_peaks, key=lambda x:x[1]-x[0]) max_wave_dis = wave[1] - wave[0] #判断是否是左侧车牌边缘 if wave_peaks[0][1] - wave_peaks[0][0] < max_wave_dis/3 and wave_peaks[0][0] == 0: wave_peaks.pop(0) #组合分离汉字 cur_dis = 0 for i,wave in enumerate(wave_peaks): if wave[1] - wave[0] + cur_dis > max_wave_dis * 0.6: break else: cur_dis += wave[1] - wave[0] if i > 0: wave = (wave_peaks[0][0], wave_peaks[i][1]) wave_peaks = wave_peaks[i+1:] wave_peaks.insert(0, wave) #去除车牌上的分隔点 point = wave_peaks[2] if point[1] - point[0] < max_wave_dis/3: point_img = gray_img[:,point[0]:point[1]] if np.mean(point_img) < 255/5: wave_peaks.pop(2) if len(wave_peaks) <= 6: print("peak less 2:", len(wave_peaks)) continue part_cards = seperate_card(gray_img, wave_peaks) for i, part_card in enumerate(part_cards): #可能是固定车牌的铆钉 if np.mean(part_card) < 255/5: print("a point") continue part_card_old = part_card #w = abs(part_card.shape[1] - SZ)//2 w = part_card.shape[1] // 3 part_card = cv2.copyMakeBorder(part_card, 0, 0, w, w, cv2.BORDER_CONSTANT, value = [0,0,0]) part_card = cv2.resize(part_card, (SZ, SZ), 
interpolation=cv2.INTER_AREA) #cv2.imshow("part", part_card_old) #cv2.waitKey(0) #cv2.imwrite("u.jpg", part_card) #part_card = deskew(part_card) part_card = preprocess_hog([part_card]) if i == 0: resp = self.modelchinese.predict(part_card) charactor = provinces[int(resp[0]) - PROVINCE_START] else: resp = self.model.predict(part_card) # charactor = chr(resp[0]) charactor = chr(int(resp[0])) # 显式转换为int #判断最后一个数是否是车牌边缘,假设车牌边缘被认为是1 if charactor == "1" and i == len(part_cards)-1: if part_card_old.shape[0]/part_card_old.shape[1] >= 8:#1太细,认为是边缘 print(part_card_old.shape) continue predict_result.append(charactor) roi = card_img card_color = color break return predict_result, roi, card_color#识别到的字符、定位的车牌图像、车牌颜色 #实现单图多车牌识别 def separate_card(self, gray_img, wave_peaks): """字符分割函数""" part_cards = [] for wave in wave_peaks: part_card = gray_img[:, wave[0]:wave[1]] if part_card.shape[1] > 0: part_cards.append(part_card) return part_cards def predict_multi(self, car_pic, resize_rate=1): if type(car_pic) == type(""): img = imreadex(car_pic) else: img = car_pic pic_hight, pic_width = img.shape[:2] if pic_width > MAX_WIDTH: pic_rate = MAX_WIDTH / pic_width img = cv2.resize(img, (MAX_WIDTH, int(pic_hight * pic_rate)), interpolation=cv2.INTER_LANCZOS4) if resize_rate != 1: img = cv2.resize(img, (int(pic_width * resize_rate), int(pic_hight * resize_rate)), interpolation=cv2.INTER_LANCZOS4) # 初始化参数 blur = self.cfg["blur"] if blur > 0: img = cv2.GaussianBlur(img, (blur, blur), 0) img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) kernel = np.ones((20, 20), np.uint8) img_opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel) img_opening = cv2.addWeighted(img, 1, img_opening, -1, 0) ret, img_thresh = cv2.threshold(img_opening, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) img_edge = cv2.Canny(img_thresh, 100, 200) kernel = np.ones((self.cfg["morphologyr"], self.cfg["morphologyc"]), np.uint8) img_edge1 = cv2.morphologyEx(img_edge, cv2.MORPH_CLOSE, kernel) img_edge2 = cv2.morphologyEx(img_edge1, cv2.MORPH_OPEN, kernel) contours, hierarchy = cv2.findContours(img_edge2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) contours = [cnt for cnt in contours if cv2.contourArea(cnt) > Min_Area] car_contours = [] for cnt in contours: rect = cv2.minAreaRect(cnt) area_width, area_height = rect[1] if area_width < area_height: area_width, area_height = area_height, area_width wh_ratio = area_width / area_height if wh_ratio > 2 and wh_ratio < 5.5: car_contours.append(rect) results = [] for rect in car_contours: # 矫正车牌区域 if rect[2] > -1 and rect[2] < 1: angle = 1 else: angle = rect[2] rect = (rect[0], (rect[1][0] + 5, rect[1][1] + 5), angle) box = cv2.boxPoints(rect) box = np.intp(box) heigth_point = right_point = [0, 0] left_point = low_point = [pic_width, pic_hight] for point in box: if left_point[0] > point[0]: left_point = point if low_point[1] > point[1]: low_point = point if heigth_point[1] < point[1]: heigth_point = point if right_point[0] < point[0]: right_point = point if left_point[1] <= right_point[1]: # 正角度 new_right_point = [right_point[0], heigth_point[1]] pts2 = np.float32([left_point, heigth_point, new_right_point]) pts1 = np.float32([left_point, heigth_point, right_point]) M = cv2.getAffineTransform(pts1, pts2) dst = cv2.warpAffine(img, M, (pic_width, pic_hight)) point_limit(new_right_point) point_limit(heigth_point) point_limit(left_point) card_img = dst[int(left_point[1]):int(heigth_point[1]), int(left_point[0]):int(new_right_point[0])] elif left_point[1] > right_point[1]: # 负角度 new_left_point = [left_point[0], 
heigth_point[1]] pts2 = np.float32([new_left_point, heigth_point, right_point]) pts1 = np.float32([left_point, heigth_point, right_point]) M = cv2.getAffineTransform(pts1, pts2) dst = cv2.warpAffine(img, M, (pic_width, pic_hight)) point_limit(right_point) point_limit(heigth_point) point_limit(new_left_point) card_img = dst[int(right_point[1]):int(heigth_point[1]), int(new_left_point[0]):int(right_point[0])] # 颜色识别(蓝/绿/黄) if len(card_img.shape) == 2: # 灰度图检测 card_img = cv2.cvtColor(card_img, cv2.COLOR_GRAY2BGR) card_img_hsv = cv2.cvtColor(card_img, cv2.COLOR_BGR2HSV) # 必须的转换步骤 color = self.detect_color(card_img_hsv) # 字符分割与识别 gray_img = cv2.cvtColor(card_img, cv2.COLOR_BGR2GRAY) if color in ["green", "yello"]: gray_img = cv2.bitwise_not(gray_img) ret, gray_img = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) # 水平投影分割 horizontal_sum = np.sum(gray_img, axis=1) wave_peaks = self.find_peaks(horizontal_sum) if len(wave_peaks) > 0: gray_img = gray_img[wave_peaks[0][0]:wave_peaks[-1][1], :] # 垂直投影分割 vertical_sum = np.sum(gray_img, axis=0) char_peaks = self.find_peaks(vertical_sum) # 字符识别 plate_chars = [] for i, (start, end) in enumerate(char_peaks): char_img = gray_img[:, start:end] char_img = cv2.resize(char_img, (20, 20)) if i == 0: # 第一个字符为中文 char_img = preprocess_hog([char_img]) resp = self.modelchinese.predict(char_img) plate_chars.append(provinces[int(resp[0]) - PROVINCE_START]) else: # 其他字符为字母/数字 char_img = preprocess_hog([char_img]) resp = self.model.predict(char_img) plate_chars.append(chr(int(resp[0]))) results.append({ "plate": "".join(plate_chars), "color": color, "roi": card_img }) return results def detect_color(self, hsv_img): """识别车牌颜色""" # 颜色阈值定义 blue_lower = np.array([100, 50, 50]) blue_upper = np.array([124, 255, 255]) green_lower = np.array([35, 50, 50]) green_upper = np.array([99, 255, 255]) yellow_lower = np.array([11, 50, 50]) yellow_upper = np.array([34, 255, 255]) # 计算各颜色像素占比 blue_mask = cv2.inRange(hsv_img, blue_lower, blue_upper) green_mask = cv2.inRange(hsv_img, green_lower, green_upper) yellow_mask = cv2.inRange(hsv_img, yellow_lower, yellow_upper) total_pixels = hsv_img.shape[0] * hsv_img.shape[1] blue_ratio = np.count_nonzero(blue_mask) / total_pixels green_ratio = np.count_nonzero(green_mask) / total_pixels yellow_ratio = np.count_nonzero(yellow_mask) / total_pixels # 确定主要颜色 if blue_ratio > 0.3: return "blue" elif green_ratio > 0.3: return "green" elif yellow_ratio > 0.3: return "yello" else: return "unknown" def find_peaks(self, histogram, min_width=2): """寻找直方图波峰""" peaks = [] start = 0 rising = False for i, value in enumerate(histogram): if value > 0 and not rising: start = i rising = True elif value == 0 and rising: if i - start >= min_width: peaks.append((start, i)) rising = False if rising and len(histogram) - start >= min_width: peaks.append((start, len(histogram))) return peaks def process_video(self, source=0, output_path=None, show=True): cap = cv2.VideoCapture(source) if not cap.isOpened(): print("无法打开摄像头") return while True: ret, frame = cap.read() if not ret: print("无法接收帧 (stream end?). Exiting ...") break r, roi, color = self.predict(frame) if roi is not None and show: cv2.imshow("识别的车牌", roi) if cv2.waitKey(1) & 0xFF == 27: # 按ESC键退出 break cap.release() cv2.destroyAllWindows() # if __name__ == '__main__': # c = CardPredictor() # c.train_svm() # r, roi, color = c.predict("10.jpg") # print(r) if __name__ == '__main__': c = CardPredictor() c.train_svm() # 用户交互菜单 while True: print("\n===== 车牌识别系统 =====") print("1. 
单图片单车牌识别") print("2. 单图片多车牌识别") print("3. 摄像头实时识别") print("4. 视频文件识别") print("5. 退出") choice = input("请选择功能 (1-5): ") if choice == "1": # 单图片单车牌识别(原功能) img_path = input("请输入图片路径: ") r, roi, color = c.predict(img_path) print(f"识别结果: {''.join(r)}, 车牌颜色: {color}") # 显示识别的车牌 if roi is not None: cv2.imshow("识别的车牌", roi) cv2.waitKey(0) cv2.destroyAllWindows() elif choice == "2": # 单图片多车牌识别 img_path = input("请输入图片路径: ") results = c.predict_multi(img_path) if results: print(f"共识别到 {len(results)} 个车牌:") for i, result in enumerate(results, 1): try: # 安全获取字段并设置默认值 plate_num = ''.join(result.get("plate_number", ["识别失败"])) plate_color = result.get("plate_color", "未知颜色") roi_img = result.get("roi", None) print(f"车牌 {i}: {plate_num}, 颜色: {plate_color}") # 仅当存在ROI图像时显示 if roi_img is not None: cv2.imshow(f"车牌 {i}", roi_img) cv2.waitKey(1) # 实时刷新显示 else: print(f"警告: 车牌 {i} 无ROI图像") except Exception as e: print(f"处理车牌 {i} 时出错: {str(e)}") continue cv2.waitKey(0) cv2.destroyAllWindows() else: print("未识别到车牌") elif choice == "3": # 摄像头实时识别 print("按ESC键退出...") c.process_video(0, show=True) elif choice == "4": # 视频文件识别 video_path = input("请输入视频路径: ") output_path = input("请输入输出路径 (留空则不保存): ") output_path = output_path if output_path.strip() else None print("按ESC键退出...") c.process_video(video_path, output_path) elif choice == "5": print("程序已退出") break else: print("无效选择,请重新输入") 基于这个代码修改,让它可以识别出车牌号和车牌颜色以及有几个车牌
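
上面这份代码其实已经能返回车牌个数(`len(results)`)、号码和颜色,真正的问题是字段名不一致:`predict_multi` 写入结果时用的键是 `"plate"`/`"color"`,而主菜单读取的却是 `"plate_number"`/`"plate_color"`,所以永远只会打印"识别失败/未知颜色"。一个最小的修改示意(只统一键名,其余逻辑不变):

```python
# predict_multi 末尾:改用主菜单期望的键名写入结果
results.append({
    "plate_number": plate_chars,   # 字符列表,菜单中用 ''.join(...) 拼接成车牌号
    "plate_color": color,          # detect_color 返回的颜色字符串
    "roi": card_img                # 定位出的车牌区域图像
})

# 主菜单中即可直接得到车牌个数、号码与颜色:
# print(f"共识别到 {len(results)} 个车牌")
```

另外注意,模块级函数 `seperate_card`(predict 中调用)与类里新增的方法 `separate_card` 拼写不同,属于两个名字,建议统一以免混淆。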

import cv2 import numpy as np import matplotlib.pyplot as plt DEBUG = 1 myimg = cv2.imread('IMG1.png') myimg = cv2.cvtColor(myimg, cv2.COLOR_BGR2GRAY).astype(np.float64) img_GAUS = cv2.GaussianBlur(myimg, (5, 5), 0) # 参数:(输入图像, 卷积核大小, 标准差) if DEBUG == 1: plt.figure() plt.imshow(img_GAUS, cmap='gray') plt.title('img GAUS') # 将图像转换为 8 位无符号整数类型 # img_GAUS_8u = cv2.normalize(img_GAUS, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8) img_GAUS_8u = img_GAUS.astype(np.uint8) # 应用Otsu阈值法 threshold, binary = cv2.threshold(img_GAUS_8u, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) # 反二值化:将黑线变为白色 binary_inv = cv2.bitwise_not(binary) if DEBUG == 1: plt.figure() plt.imshow(binary_inv, cmap='gray') plt.title('binary') # 1. 形态学操作 - 优化线条 kernel = np.ones((5, 5), np.uint8) #根据实际情况调整卷积核大小 morphed = cv2.morphologyEx(binary_inv, cv2.MORPH_OPEN, kernel) # 开运算去除小噪点 morphed = cv2.morphologyEx(morphed, cv2.MORPH_CLOSE, kernel) # 闭运算连接断线 if DEBUG == 1: plt.figure() plt.imshow(morphed, cmap='gray') plt.title('morphed') # ============ 轮廓检测 ============ # 1. 寻找轮廓 contours, hierarchy = cv2.findContours( morphed, cv2.RETR_EXTERNAL, # 只检测最外层轮廓 cv2.CHAIN_APPROX_SIMPLE # 简化轮廓点 ) # 2. 创建可视化图像 # 创建黑色背景图像(与原图相同大小) contour_img = np.zeros_like(myimg) # 转换为三通道以便显示彩色轮廓 contour_img_color = cv2.cvtColor(contour_img, cv2.COLOR_GRAY2BGR) # 3. 绘制轮廓 if len(contours) > 0: # 按面积排序轮廓 contours = sorted(contours, key=cv2.contourArea, reverse=True) # 只绘制面积最大的3个轮廓 for i, contour in enumerate(contours[:3]): # 计算轮廓特征 area = cv2.contourArea(contour) perimeter = cv2.arcLength(contour, True) # 计算最小外接矩形 rect = cv2.minAreaRect(contour) box = cv2.boxPoints(rect) box = np.int0(box) # 计算轮廓重心 M = cv2.moments(contour) if M["m00"] != 0: cX = int(M["m10"] / M["m00"]) cY = int(M["m01"] / M["m00"]) else: cX, cY = 0, 0 # 为不同轮廓分配不同颜色 if i == 0: color = (0, 0, 255) # 红色 - 最大轮廓 elif i == 1: color = (0, 255, 0) # 绿色 - 第二大轮廓 else: color = (255, 0, 0) # 蓝色 - 第三大轮廓 # 绘制轮廓 cv2.drawContours(contour_img_color, [contour], -1, color, 2) # 绘制最小外接矩形 cv2.drawContours(contour_img_color, [box], 0, color, 1) # 绘制重心 cv2.circle(contour_img_color, (cX, cY), 5, color, -1) # 添加轮廓信息 cv2.putText(contour_img_color, f"Area: {area:.0f}", (cX + 10, cY - 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1) cv2.putText(contour_img_color, f"Perim: {perimeter:.0f}", (cX + 10, cY), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1) # 添加轮廓计数信息 cv2.putText(contour_img_color, f"Total Contours: {len(contours)}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2) else: # 没有找到轮廓 cv2.putText(contour_img_color, "No Contours Found", (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2) # 4. 
显示轮廓图像 if DEBUG == 1: plt.figure(figsize=(10, 8)) plt.imshow(cv2.cvtColor(contour_img_color, cv2.COLOR_BGR2RGB)) plt.title('轮廓检测结果') # 显示原图与轮廓叠加 original_color = cv2.cvtColor(img_GAUS_8u, cv2.COLOR_GRAY2BGR) overlay = cv2.addWeighted(original_color, 0.7, contour_img_color, 0.3, 0) plt.figure(figsize=(10, 8)) plt.imshow(cv2.cvtColor(overlay, cv2.COLOR_BGR2RGB)) plt.title('原图与轮廓叠加') if DEBUG == 1: plt.show() E:\Anaconda\envs\env_2024NH\python.exe E:\2024NH_problem\findline.py Traceback (most recent call last): File "E:\2024NH_problem\findline.py", line 49, in <module> contour_img_color = cv2.cvtColor(contour_img, cv2.COLOR_GRAY2BGR) cv2.error: OpenCV(4.10.0) C:\b\abs_a3k2mim6t7\croot\opencv-suite_1744821726920\work\modules\imgproc\src\color.simd_helpers.hpp:94: error: (-2:Unspecified error) in function '__cdecl cv::impl::anonymous-namespace'::CvtHelper<struct cv::impl::anonymous namespace'::Set<1,-1,-1>,struct cv::impl::A0x22bd0e3b::Set<3,4,-1>,struct cv::impl::A0x22bd0e3b::Set<0,2,5>,4>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)' > Unsupported depth of input image: > 'VDepth::contains(depth)' > where > 'depth' is 6 (CV_64F) 进程已结束,退出代码为 1
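
报错原因在回溯里已经点明:`cvtColor` 收到了深度为 CV_64F 的输入。`myimg` 读入后被 `.astype(np.float64)`,于是 `contour_img = np.zeros_like(myimg)` 继承了 float64,而 `COLOR_GRAY2BGR` 只支持 8U/16U/32F 深度。一个最小修复示意(改用已有的 8 位图像做模板即可):

```python
# 用 8 位灰度图作模板,得到 dtype=uint8 的黑色背景,cvtColor 即可正常工作
contour_img = np.zeros_like(img_GAUS_8u)
contour_img_color = cv2.cvtColor(contour_img, cv2.COLOR_GRAY2BGR)
```

顺带一提,`np.int0` 在较新的 NumPy 中已被弃用并移除,`box = np.int0(box)` 建议改为 `box = np.intp(box)`(上一份车牌代码里已是这种写法)。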

以下代码出现了以下错误: import cv2 import numpy as np import matplotlib.pyplot as plt import os from glob import glob from tqdm import tqdm class KittiLaneMapper: def __init__(self, data_path='C:/Users/12911/Desktop/V-SLAM/Lane line detect/data_road/data_road'): """ 初始化KITTI数据处理类 :param data_path: KITTI数据集路径(包含image_2, calib等文件夹) """ self.data_path = data_path self.image_dir = os.path.join(data_path, 'image_2') self.calib_dir = os.path.join(data_path, 'calib') self.poses_file = os.path.join(data_path, 'poses.txt') # 获取所有图像文件 self.image_files = sorted(glob(os.path.join(self.image_dir, '*.png'))) self.map_points = [] # 存储地图点 def detect_lanes(self, img): """ 车道线检测函数 :param img: 输入BGR图像 :return: 左右车道线点集 """ # 1. 图像预处理 gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (5, 5), 0) edges = cv2.Canny(blur, 50, 150) # 2. 定义ROI区域(道路区域) height, width = img.shape[:2] mask = np.zeros_like(edges) vertices = np.array([[(0, height * 0.65), (width, height * 0.65), (width, height), (0, height)]], dtype=np.int32) cv2.fillPoly(mask, [vertices], 255) masked_edges = cv2.bitwise_and(edges, mask) # 3. 霍夫变换检测直线 lines = cv2.HoughLinesP(masked_edges, rho=1, theta=np.pi / 180, threshold=15, minLineLength=40, maxLineGap=20) # 4. 车道线分类与过滤 left_lines, right_lines = [], [] if lines is not None: for line in lines: x1, y1, x2, y2 = line[0] # 计算斜率并过滤水平线 slope = (y2 - y1) / (x2 - x1 + 1e-5) # 避免除以零 if abs(slope) > 0.5: # 只保留斜率较大的线 if slope < 0 and x1 < width / 2 and x2 < width / 2: left_lines.append([x1, y1, x2, y2]) elif slope > 0 and x1 > width / 2 and x2 > width / 2: right_lines.append([x1, y1, x2, y2]) return left_lines, right_lines def get_ipm_matrix(self, img, calib_file): """ 计算逆透视变换矩阵 :param img: 输入图像(用于获取尺寸) :param calib_file: 标定文件路径 :return: 逆透视变换矩阵 """ height, width = img.shape[:2] # 从标定文件读取相机参数(简化处理) with open(calib_file, 'r') as f: lines = f.readlines() P2 = np.array([float(x) for x in lines[0].split()[1:]]).reshape(3, 4) # 定义源点和目标点(根据KITTI数据集特性) src_points = np.array([ [width * 0.3, height * 0.65], # 左上 [width * 0.7, height * 0.65], # 右上 [width, height], # 右下 [0, height] # 左下 ], dtype=np.float32) # 目标点(鸟瞰图) dst_points = np.array([ [0, 0], # 左上 [width, 0], # 右上 [width, height], # 右下 [0, height] # 左下 ], dtype=np.float32) return cv2.getPerspectiveTransform(src_points, dst_points) def transform_to_world(self, points, pose, ipm_matrix): """ 将图像点转换到世界坐标系 :param points: 图像中的点集 (N, 2) :param pose: 当前帧位姿 (4x4变换矩阵) :param ipm_matrix: 逆透视变换矩阵 :return: 世界坐标系中的点 (N, 2) """ # 应用逆透视变换 homogeneous_points = np.hstack((points, np.ones((points.shape[0], 1)))) ipm_points = (ipm_matrix @ homogeneous_points.T).T ipm_points = ipm_points[:, :2] / ipm_points[:, 2][:, None] # 转换到世界坐标系 homogeneous_3d = np.hstack((ipm_points, np.zeros((ipm_points.shape[0], 1)), np.ones((ipm_points.shape[0], 1)))) world_points = (pose @ homogeneous_3d.T).T return world_points[:, :2] # 只取XY平面 def process_sequence(self, num_frames=None): """ 处理图像序列并构建地图 :param num_frames: 处理的帧数(None表示全部) """ # 读取位姿数据 poses = np.loadtxt(self.poses_file).reshape(-1, 3, 4) # 添加齐次坐标 homogeneous_poses = [] for pose in poses: homogeneous_pose = np.vstack((pose, [0, 0, 0, 1])) homogeneous_poses.append(homogeneous_pose) poses = np.array(homogeneous_poses) # 处理指定数量的帧 num_frames = len(self.image_files) if num_frames is None else min(num_frames, len(self.image_files)) for i in tqdm(range(num_frames), desc="Processing frames"): # 读取图像和标定文件 img = cv2.imread(self.image_files[i]) calib_file = os.path.join(self.calib_dir, f"{i:06d}.txt") # 车道线检测 left_lines, right_lines = 
self.detect_lanes(img) # 获取逆透视变换矩阵 ipm_matrix = self.get_ipm_matrix(img, calib_file) # 提取车道线点 lane_points = [] for line in left_lines + right_lines: lane_points.append([line[0], line[1]]) lane_points.append([line[2], line[3]]) if lane_points: lane_points = np.array(lane_points) # 转换到世界坐标系 world_points = self.transform_to_world(lane_points, poses[i], ipm_matrix) self.map_points.extend(world_points.tolist()) def visualize_map(self, output_path='lane_map_2d.png'): """ 可视化2D地图 :param output_path: 输出文件路径 """ if not self.map_points: print("No map points to visualize. Run process_sequence first.") return plt.figure(figsize=(12, 8)) points = np.array(self.map_points) # 绘制地图点 plt.scatter(points[:, 0], points[:, 1], s=1, alpha=0.3, c='blue') # 添加轨迹线 plt.title('2D Lane Map from KITTI Dataset') plt.xlabel('X position (meters)') plt.ylabel('Y position (meters)') plt.grid(True) plt.axis('equal') plt.savefig(output_path, dpi=300) plt.close() print(f"Map saved to {output_path}") if __name__ == "__main__": # 使用示例 data_path = 'path/to/your/kitti/dataset' # 替换为你的KITTI数据集路径 mapper = KittiLaneMapper(data_path) # 处理前100帧(可调整) mapper.process_sequence(num_frames=100) # 生成并保存地图 mapper.visualize_map() Traceback (most recent call last): File "C:\Users\12911\PycharmProjects\pythonProject3\detect_1.py", line 187, in <module> mapper.process_sequence(num_frames=100) File "C:\Users\12911\PycharmProjects\pythonProject3\detect_1.py", line 119, in process_sequence poses = np.loadtxt(self.poses_file).reshape(-1, 3, 4) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ProgramData\anaconda3\Lib\site-packages\numpy\lib\npyio.py", line 1373, in loadtxt arr = _read(fname, dtype=dtype, comment=comment, delimiter=delimiter, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ProgramData\anaconda3\Lib\site-packages\numpy\lib\npyio.py", line 992, in _read fh = np.lib._datasource.open(fname, 'rt', encoding=encoding) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ProgramData\anaconda3\Lib\site-packages\numpy\lib\_datasource.py", line 193, in open return ds.open(path, mode, encoding=encoding, newline=newline) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ProgramData\anaconda3\Lib\site-packages\numpy\lib\_datasource.py", line 533, in open raise FileNotFoundError(f"{path} not found.") FileNotFoundError: path/to/your/kitti/dataset\poses.txt not found.
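
这个回溯与车道线算法本身无关:`__main__` 里把 `data_path` 留成了占位符 `'path/to/your/kitti/dataset'`,覆盖了类定义中的真实默认路径,`np.loadtxt` 自然找不到 `poses.txt`。另外需要留意,KITTI 的 data_road(道路分割)数据集本身并不附带位姿文件,`poses.txt` 通常来自 odometry 基准,需要自行下载或提供。一个带存在性检查的启动示意(路径仅沿用类中的默认值,请按实际情况替换):

```python
import os

# 换成真实数据集路径(此处沿用类定义中的默认路径,仅作示例)
data_path = 'C:/Users/12911/Desktop/V-SLAM/Lane line detect/data_road/data_road'
poses_file = os.path.join(data_path, 'poses.txt')
if not os.path.exists(poses_file):
    raise FileNotFoundError(
        f'{poses_file} 不存在:data_road 数据集不含位姿文件,'
        '请从 KITTI odometry 基准获取 poses,或自行提供每帧 3x4 位姿矩阵')

mapper = KittiLaneMapper(data_path)
mapper.process_sequence(num_frames=100)
mapper.visualize_map()
```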

--------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[1], line 1 ----> 1 from keras.datasets import imdb 3 # 加载IMDB数据集(保留最常见的20,000词) 4 max_features = 20000 File A:\anaconda\Lib\site-packages\keras\__init__.py:7 1 """DO NOT EDIT. 2 3 This file was autogenerated. Do not edit it by hand, 4 since your modifications would be overwritten. 5 """ ----> 7 from keras import _tf_keras as _tf_keras 8 from keras import activations as activations 9 from keras import applications as applications File A:\anaconda\Lib\site-packages\keras\_tf_keras\__init__.py:1 ----> 1 from keras._tf_keras import keras File A:\anaconda\Lib\site-packages\keras\_tf_keras\keras\__init__.py:7 1 """DO NOT EDIT. 2 3 This file was autogenerated. Do not edit it by hand, 4 since your modifications would be overwritten. 5 """ ----> 7 from keras import activations as activations 8 from keras import applications as applications 9 from keras import callbacks as callbacks File A:\anaconda\Lib\site-packages\keras\activations\__init__.py:7 1 """DO NOT EDIT. 2 3 This file was autogenerated. Do not edit it by hand, 4 since your modifications would be overwritten. 5 """ ----> 7 from keras.src.activations import deserialize as deserialize 8 from keras.src.activations import get as get 9 from keras.src.activations import serialize as serialize File A:\anaconda\Lib\site-packages\keras\src\__init__.py:1 ----> 1 from keras.src import activations 2 from keras.src import applications 3 from keras.src import backend File A:\anaconda\Lib\site-packages\keras\src\activations\__init__.py:3 1 import types ----> 3 from keras.src.activations.activations import celu 4 from keras.src.activations.activations import elu 5 from keras.src.activations.activations import exponential File A:\anaconda\Lib\site-packages\keras\src\activations\activations.py:1 ----> 1 from keras.src import backend 2 from keras.src import ops 3 from keras.src.api_export import keras_export File A:\anaconda\Lib\site-packages\keras\src\backend\__init__.py:10 7 import torch 9 from keras.src.api_export import keras_export ---> 10 from keras.src.backend.common.dtypes import result_type 11 from keras.src.backend.common.keras_tensor import KerasTensor 12 from keras.src.backend.common.keras_tensor import any_symbolic_tensors File A:\anaconda\Lib\site-packages\keras\src\backend\common\__init__.py:2 1 from keras.src.backend.common import backend_utils ----> 2 from keras.src.backend.common.dtypes import result_type 3 from keras.src.backend.common.variables import AutocastScope 4 from keras.src.backend.common.variables import Variable as KerasVariable File A:\anaconda\Lib\site-packages\keras\src\backend\common\dtypes.py:5 3 from keras.src.api_export import keras_export 4 from keras.src.backend import config ----> 5 from keras.src.backend.common.variables import standardize_dtype 7 BOOL_TYPES = ("bool",) 8 INT_TYPES = ( 9 "uint8", 10 "uint16", (...) 
16 "int64", 17 ) File A:\anaconda\Lib\site-packages\keras\src\backend\common\variables.py:11 9 from keras.src.backend.common.stateless_scope import get_stateless_scope 10 from keras.src.backend.common.stateless_scope import in_stateless_scope ---> 11 from keras.src.utils.module_utils import tensorflow as tf 12 from keras.src.utils.naming import auto_name 15 class Variable: File A:\anaconda\Lib\site-packages\keras\src\utils\__init__.py:1 ----> 1 from keras.src.utils.audio_dataset_utils import audio_dataset_from_directory 2 from keras.src.utils.dataset_utils import split_dataset 3 from keras.src.utils.file_utils import get_file File A:\anaconda\Lib\site-packages\keras\src\utils\audio_dataset_utils.py:4 1 import numpy as np 3 from keras.src.api_export import keras_export ----> 4 from keras.src.utils import dataset_utils 5 from keras.src.utils.module_utils import tensorflow as tf 6 from keras.src.utils.module_utils import tensorflow_io as tfio File A:\anaconda\Lib\site-packages\keras\src\utils\dataset_utils.py:9 5 from multiprocessing.pool import ThreadPool 7 import numpy as np ----> 9 from keras.src import tree 10 from keras.src.api_export import keras_export 11 from keras.src.utils import io_utils File A:\anaconda\Lib\site-packages\keras\src\tree\__init__.py:1 ----> 1 from keras.src.tree.tree_api import assert_same_paths 2 from keras.src.tree.tree_api import assert_same_structure 3 from keras.src.tree.tree_api import flatten File A:\anaconda\Lib\site-packages\keras\src\tree\tree_api.py:8 5 from keras.src.utils.module_utils import optree 7 if optree.available: ----> 8 from keras.src.tree import optree_impl as tree_impl 9 elif dmtree.available: 10 from keras.src.tree import dmtree_impl as tree_impl File A:\anaconda\Lib\site-packages\keras\src\tree\optree_impl.py:13 11 # Register backend-specific node classes 12 if backend() == "tensorflow": ---> 13 from tensorflow.python.trackable.data_structures import ListWrapper 14 from tensorflow.python.trackable.data_structures import _DictWrapper 16 try: File A:\anaconda\Lib\site-packages\tensorflow\__init__.py:55 53 from tensorflow._api.v2 import autograph 54 from tensorflow._api.v2 import bitwise ---> 55 from tensorflow._api.v2 import compat 56 from tensorflow._api.v2 import config 57 from tensorflow._api.v2 import data File A:\anaconda\Lib\site-packages\tensorflow\_api\v2\compat\__init__.py:8 3 """Public API for tf._api.v2.compat namespace 4 """ 6 import sys as _sys ----> 8 from tensorflow._api.v2.compat import v1 9 from tensorflow._api.v2.compat import v2 10 from tensorflow.python.compat.compat import forward_compatibility_horizon # line: 125 File A:\anaconda\Lib\site-packages\tensorflow\_api\v2\compat\v1\__init__.py:30 28 from tensorflow._api.v2.compat.v1 import autograph 29 from tensorflow._api.v2.compat.v1 import bitwise ---> 30 from tensorflow._api.v2.compat.v1 import compat 31 from tensorflow._api.v2.compat.v1 import config 32 from tensorflow._api.v2.compat.v1 import data File A:\anaconda\Lib\site-packages\tensorflow\_api\v2\compat\v1\compat\__init__.py:8 3 """Public API for tf._api.v2.compat namespace 4 """ 6 import sys as _sys ----> 8 from tensorflow._api.v2.compat.v1.compat import v1 9 from tensorflow._api.v2.compat.v1.compat import v2 10 from tensorflow.python.compat.compat import forward_compatibility_horizon # line: 125 File A:\anaconda\Lib\site-packages\tensorflow\_api\v2\compat\v1\compat\v1\__init__.py:32 30 from tensorflow._api.v2.compat.v1 import compat 31 from tensorflow._api.v2.compat.v1 import config ---> 32 from 
tensorflow._api.v2.compat.v1 import data 33 from tensorflow._api.v2.compat.v1 import debugging 34 from tensorflow._api.v2.compat.v1 import distribute File A:\anaconda\Lib\site-packages\tensorflow\_api\v2\compat\v1\data\__init__.py:8 3 """Public API for tf._api.v2.data namespace 4 """ 6 import sys as _sys ----> 8 from tensorflow._api.v2.compat.v1.data import experimental 9 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE # line: 103 10 from tensorflow.python.data.ops.dataset_ops import DatasetV1 as Dataset # line: 3710 File A:\anaconda\Lib\site-packages\tensorflow\_api\v2\compat\v1\data\experimental\__init__.py:32 30 from tensorflow.python.data.experimental.ops.interleave_ops import parallel_interleave # line: 29 31 from tensorflow.python.data.experimental.ops.interleave_ops import sample_from_datasets_v1 as sample_from_datasets # line: 158 ---> 32 from tensorflow.python.data.experimental.ops.iterator_model_ops import get_model_proto # line: 25 33 from tensorflow.python.data.experimental.ops.iterator_ops import make_saveable_from_iterator # line: 38 34 from tensorflow.python.data.experimental.ops.lookup_ops import DatasetInitializer # line: 54 ModuleNotFoundError: No module named 'tensorflow.python.data.experimental.ops.iterator_model_ops' 运行加载Keras中的IMDB数据集的代码报错
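
这不是 IMDB 加载代码本身的问题,而是环境里独立安装的 Keras 3 与 TensorFlow 版本不匹配:Keras 导入时走 TensorFlow 后端,而当前 TensorFlow 里没有 `tensorflow.python.data.experimental.ops.iterator_model_ops` 这个私有模块,典型的版本错配症状。常见处理思路是把两者升级到配套版本,例如 `pip install -U tensorflow keras`(具体版本以官方兼容表为准,此处不保证某个固定组合);统一版本后可以直接用 TensorFlow 自带的 Keras 入口:

```python
# 版本统一后,通过 tensorflow 自带的 keras 加载 IMDB 数据集
from tensorflow.keras.datasets import imdb

max_features = 20000  # 保留最常见的 20,000 词
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
```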

"/pkg/qct/software/llvm/release/arm/14.0.0/bin/clang" -g -Os -fshort-wchar -fno-strict-aliasing -Wall -Werror -Wno-array-bounds -c -include AutoGen.h -mlittle-endian -fno-short-enums -save-temps -fverbose-asm -funsigned-char -ffunction-sections -fdata-sections -fno-builtin -Wno-address -fno-asynchronous-unwind-tables -target aarch64-linux-gnu -fcolor-diagnostics -fdiagnostics-format=vi -Wno-parentheses-equality -Wno-tautological-compare -Wno-tautological-constant-out-of-range-compare -Wno-empty-body -Wno-unknown-warning-option -Wno-unused-function -Wno-bitwise-op-parentheses -mcmodel=small -ffixed-x18 -mstrict-align -fstack-protector -Wno-nonportable-include-path -Wno-misleading-indentation -fno-common -mtune=cortex-a53 -I/home/chen-docker/bin/boot/boot_images/BuildLogs/QcomPkg/SocPkg/LeMans/AU/Include -include /home/chen-docker/bin/boot/boot_images/boot/QcomPkg/Include/Library/DebugLib.h -DQCOM_EDK2_PATCH -DDISABLE_DEP -DENABLE_XN -DENABLE_ASLR -DENABLE_DEP_64 -DENABLE_EXEC_CODE_READY_TO_BOOT -DENABLE_AUTO_PLAT -DMAX_DDR_REGIONS=6 -mstrict-align -mcpu=cortex-a53 -DPRODMODE -c -o /home/chen-docker/bin/boot/boot_images/Build/LeMansAU/Core/RELEASE_CLANG140LINUX/AARCH64/MdeModulePkg/Library/UefiHiiLib/UefiHiiLib/OUTPUT/./HiiLib.obj @/home/chen-docker/bin/boot/boot_images/Build/LeMansAU/Core/RELEASE_CLANG140LINUX/AARCH64/MdeModulePkg/Library/UefiHiiLib/UefiHiiLib/OUTPUT/inc.lst /home/chen-docker/bin/boot/boot_images/edk2/MdeModulePkg/Library/UefiHiiLib/HiiLib.c /pkg/qct/software/llvm/release/arm/14.0.0/bin/clang: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory GNUmakefile:373: recipe for target '/home/chen-docker/bin/boot/boot_images/Build/LeMansAU/Core/RELEASE_CLANG140LINUX/AARCH64/MdeModulePkg/Library/UefiHiiLib/UefiHiiLib/OUTPUT/HiiLib.obj' failed Building ... /home/chen-docker/bin/boot/boot_images/edk2/MdeModulePkg/Library/UefiHiiServicesLib/UefiHiiServicesLib.inf [AARCH64] make: *** [/home/chen-docker/bin/boot/boot_images/Build/LeMansAU/Core/RELEASE_CLANG140LINUX/AARCH64/MdeModulePkg/Library/UefiHiiLib/UefiHiiLib/OUTPUT/HiiLib.obj] Error 127 make: Nothing to be done for 'tbuild'. build.py... : error 7000: Failed to execute command make tbuild [/home/chen-docker/bin/boot/boot_images/Build/LeMansAU/Core/RELEASE_CLANG140LINUX/AARCH64/MdeModulePkg/Library/UefiHiiLib/UefiHiiLib]错误在哪里?

ros2中晃动imu,但是我的机器人模型不跟着晃动,而且里程计一开,里程计跑走,但是imu和机器人模型还在原地,以下是我的各个文件代码,情分析一下为什么:wheeltec_robot.cpp文件代码为: #include "turn_on_wheeltec_robot/wheeltec_robot.h" #include "turn_on_wheeltec_robot/Quaternion_Solution.h" #include "wheeltec_robot_msg/msg/data.hpp" sensor_msgs::msg::Imu Mpu6050;//Instantiate an IMU object //实例化IMU对象 using std::placeholders::_1; using namespace std; rclcpp::Node::SharedPtr node_handle = nullptr; //自动回充使用相关变量 bool check_AutoCharge_data = false; bool charge_set_state = false; /************************************** Date: January 28, 2021 Function: The main function, ROS initialization, creates the Robot_control object through the Turn_on_robot class and automatically calls the constructor initialization 功能: 主函数,ROS初始化,通过turn_on_robot类创建Robot_control对象并自动调用构造函数初始化 ***************************************/ int main(int argc, char** argv) { rclcpp::init(argc, argv); //ROS initializes and sets the node name //ROS初始化 并设置节点名称 turn_on_robot Robot_Control;//Instantiate an object //实例化一个对象 Robot_Control.Control();//Loop through data collection and publish the topic //循环执行数据采集和发布话题等操作 return 0; } /************************************** Date: January 28, 2021 Function: Data conversion function 功能: 数据转换函数 ***************************************/ short turn_on_robot::IMU_Trans(uint8_t Data_High,uint8_t Data_Low) { short transition_16; transition_16 = 0; transition_16 |= Data_High<<8; transition_16 |= Data_Low; return transition_16; } float turn_on_robot::Odom_Trans(uint8_t Data_High,uint8_t Data_Low) { float data_return; short transition_16; transition_16 = 0; transition_16 |= Data_High<<8; //Get the high 8 bits of data //获取数据的高8位 transition_16 |= Data_Low; //Get the lowest 8 bits of data //获取数据的低8位 data_return = (transition_16 / 1000)+(transition_16 % 1000)*0.001; // The speed unit is changed from mm/s to m/s //速度单位从mm/s转换为m/s return data_return; } /************************************** Date: January 28, 2021 Function: The speed topic subscription Callback function, according to the subscribed instructions through the serial port command control of the lower computer 功能: 速度话题订阅回调函数Callback,根据订阅的指令通过串口发指令控制下位机 ***************************************/ void turn_on_robot::Cmd_Vel_Callback(const geometry_msgs::msg::Twist::SharedPtr twist_aux) { short transition; //intermediate variable //中间变量 Send_Data.tx[0]= 0x06; //frame head 0x7B //帧头0X7B //Send_Data.tx[1] = AutoRecharge; //set aside //预留位 Send_Data.tx[1] = FRAME_HEADER; Send_Data.tx[2] = 0; //set aside //预留位 //The target velocity of the X-axis of the robot //机器人x轴的目标线速度 transition=0; transition = twist_aux->linear.x*1000; //将浮点数放大一千倍,简化传输 (mm/s) Send_Data.tx[4] = transition; //取数据的低8位 Send_Data.tx[3] = transition>>8; //取数据的高8位 //The target velocity of the Y-axis of the robot //机器人y轴的目标线速度 transition=0; transition = twist_aux->linear.y*1000; Send_Data.tx[6] = transition; Send_Data.tx[5] = transition>>8; //The target angular velocity of the robot's Z axis //机器人z轴的目标角速度 transition=0; transition = twist_aux->angular.z*1000; //(1000*rad/s) Send_Data.tx[8] = transition; Send_Data.tx[7] = transition>>8; Send_Data.tx[9]=Check_Sum(9,SEND_DATA_CHECK); //For the BCC check bits, see the Check_Sum function //BCC校验位,规则参见Check_Sum函数 Send_Data.tx[10]=0x0d; //frame tail 0x7D //帧尾0X7D Send_Data.tx[11]=0x0a; try { Stm32_Serial.write(Send_Data.tx,sizeof (Send_Data.tx)); //Sends data to the downloader via serial port //通过串口向下位机发送数据 } catch (serial::IOException& e) { RCLCPP_ERROR(this->get_logger(),("Unable to send data through serial port")); //If sending 
data fails, an error message is printed //如果发送数据失败,打印错误信息 } } /************************************** Date: January 28, 2021 Function: Publish the IMU data topic 功能: 发布IMU数据话题 ***************************************/ void turn_on_robot::Publish_ImuSensor() { sensor_msgs::msg::Imu Imu_Data_Pub; //Instantiate IMU topic data //实例化IMU话题数据 Imu_Data_Pub.header.stamp = rclcpp::Node::now(); Imu_Data_Pub.header.frame_id = gyro_frame_id; //IMU corresponds to TF coordinates, which is required to use the robot_pose_ekf feature pack //IMU对应TF坐标,使用robot_pose_ekf功能包需要设置此项 Imu_Data_Pub.orientation.x = Mpu6050.orientation.x; //A quaternion represents a three-axis attitude //四元数表达三轴姿态 Imu_Data_Pub.orientation.y = Mpu6050.orientation.y; Imu_Data_Pub.orientation.z = Mpu6050.orientation.z; Imu_Data_Pub.orientation.w = Mpu6050.orientation.w; Imu_Data_Pub.orientation_covariance[0] = 1e6; //Three-axis attitude covariance matrix //三轴姿态协方差矩阵 Imu_Data_Pub.orientation_covariance[4] = 1e6; Imu_Data_Pub.orientation_covariance[8] = 1e-6; Imu_Data_Pub.angular_velocity.x = Mpu6050.angular_velocity.x; //Triaxial angular velocity //三轴角速度 Imu_Data_Pub.angular_velocity.y = Mpu6050.angular_velocity.y; Imu_Data_Pub.angular_velocity.z = Mpu6050.angular_velocity.z; Imu_Data_Pub.angular_velocity_covariance[0] = 1e6; //Triaxial angular velocity covariance matrix //三轴角速度协方差矩阵 Imu_Data_Pub.angular_velocity_covariance[4] = 1e6; Imu_Data_Pub.angular_velocity_covariance[8] = 1e-6; Imu_Data_Pub.linear_acceleration.x = Mpu6050.linear_acceleration.x; //Triaxial acceleration //三轴线性加速度 Imu_Data_Pub.linear_acceleration.y = Mpu6050.linear_acceleration.y; Imu_Data_Pub.linear_acceleration.z = Mpu6050.linear_acceleration.z; imu_publisher->publish(Imu_Data_Pub); //Pub IMU topic //发布IMU话题 } /************************************** Date: January 28, 2021 Function: Publish the odometer topic, Contains position, attitude, triaxial velocity, angular velocity about triaxial, TF parent-child coordinates, and covariance matrix 功能: 发布里程计话题,包含位置、姿态、三轴速度、绕三轴角速度、TF父子坐标、协方差矩阵 ***************************************/ void turn_on_robot::Publish_Odom() { //Convert the Z-axis rotation Angle into a quaternion for expression //把Z轴转角转换为四元数进行表达 tf2::Quaternion q; q.setRPY(0,0,Mpu6050_Data.imu_yaw); geometry_msgs::msg::Quaternion odom_quat=tf2::toMsg(q); nav_msgs::msg::Odometry odom; //Instance the odometer topic data //实例化里程计话题数据 odom.header.stamp = rclcpp::Node::now(); ; odom.header.frame_id = odom_frame_id; // Odometer TF parent coordinates //里程计TF父坐标 odom.pose.pose.position.x = Robot_Pos.X; //Position //位置 odom.pose.pose.position.y = Robot_Pos.Y; //odom.pose.pose.position.z = Robot_Pos.Z; odom.pose.pose.orientation = odom_quat; //Posture, Quaternion converted by Z-axis rotation //姿态,通过Z轴转角转换的四元数 odom.child_frame_id = robot_frame_id; // Odometer TF subcoordinates //里程计TF子坐标 odom.twist.twist.linear.x = Robot_Vel.X; //Speed in the X direction //X方向速度 //odom.twist.twist.linear.y = Robot_Vel.Y; //Speed in the Y direction //Y方向速度 odom.twist.twist.angular.z = Mpu6050.angular_velocity.z; //Angular velocity around the Z axis //绕Z轴角速度 //There are two types of this matrix, which are used when the robot is at rest and when it is moving.Extended Kalman Filtering officially provides 2 matrices for the robot_pose_ekf feature pack //这个矩阵有两种,分别在机器人静止和运动的时候使用。扩展卡尔曼滤波官方提供的2个矩阵,用于robot_pose_ekf功能包 //if(Robot_Vel.X== 0&&Robot_Vel.Y== 0&&Robot_Vel.Z== 0) if(Robot_Vel.X== 0) //If the velocity is zero, it means that the error of the encoder will be relatively small, and the data of the 
encoder will be considered more reliable //如果velocity是零,说明编码器的误差会比较小,认为编码器数据更可靠 memcpy(&odom.pose.covariance, odom_pose_covariance2, sizeof(odom_pose_covariance2)), memcpy(&odom.twist.covariance, odom_twist_covariance2, sizeof(odom_twist_covariance2)); else //If the velocity of the trolley is non-zero, considering the sliding error that may be brought by the encoder in motion, the data of IMU is considered to be more reliable //如果小车velocity非零,考虑到运动中编码器可能带来的滑动误差,认为imu的数据更可靠 memcpy(&odom.pose.covariance, odom_pose_covariance, sizeof(odom_pose_covariance)), memcpy(&odom.twist.covariance, odom_twist_covariance, sizeof(odom_twist_covariance)); odom_publisher->publish(odom); //Pub odometer topic //发布里程计话题 } /************************************** Date: January 28, 2021 Function: Publish voltage-related information 功能: 发布电压相关信息 ***************************************/ void turn_on_robot::Publish_Voltage() { std_msgs::msg::Float32 voltage_msgs; //Define the data type of the power supply voltage publishing topic //定义电源电压发布话题的数据类型 static float Count_Voltage_Pub=0; if(Count_Voltage_Pub++>10) { Count_Voltage_Pub=0; voltage_msgs.data = Power_voltage; //The power supply voltage is obtained //电源供电的电压获取 voltage_publisher->publish(voltage_msgs); //Post the power supply voltage topic unit: V, volt //发布电源电压话题单位:V、伏特 } } ////////// 回充发布与回调 //////// /************************************** Date: January 17, 2022 Function: Pub the topic whether the robot finds the infrared signal (charging station) 功能: 发布机器人是否寻找到红外信号(充电桩)的话题 ***************************************/ void turn_on_robot::Publish_RED() { std_msgs::msg::UInt8 msg; msg.data=Red; RED_publisher->publish(msg); } /************************************** Date: January 14, 2022 Function: Publish a topic about whether the robot is charging 功能: 发布机器人是否在充电的话题 ***************************************/ void turn_on_robot::Publish_Charging() { static bool last_charging; std_msgs::msg::Bool msg; msg.data=Charging; Charging_publisher->publish(msg); if(last_charging==false && Charging==true)cout<<GREEN<<"Robot is charging."<<endl<<RESET; if(last_charging==true && Charging==false)cout<<RED <<"Robot charging has disconnected."<<endl<<RESET; last_charging=Charging; } /************************************** Date: January 28, 2021 Function: Publish charging current information 功能: 发布充电电流信息 ***************************************/ void turn_on_robot::Publish_ChargingCurrent() { std_msgs::msg::Float32 msg; msg.data=Charging_Current; Charging_current_publisher->publish(msg); } /************************************** Date: March 1, 2022 Function: Infrared connection speed topic subscription Callback function, according to the subscription command through the serial port to set the infrared connection speed 功能: 红外对接速度话题订阅回调函数Callback,根据订阅的指令通过串口发指令设置红外对接速度 ***************************************/ void turn_on_robot::Red_Vel_Callback(const geometry_msgs::msg::Twist::SharedPtr twist_aux) { short transition; //intermediate variable //中间变量 Send_Data.tx[0]=FRAME_HEADER; //frame head 0x7B //帧头0X7B Send_Data.tx[1] = 3; //Infrared docking speed setting flag bit = 3 //红外对接速度设置标志位=3 Send_Data.tx[2] = 0; //set aside //预留位 //The target velocity of the X-axis of the robot //机器人x轴的目标线速度 transition=0; transition = twist_aux->linear.x*1000; //将浮点数放大一千倍,简化传输 Send_Data.tx[4] = transition; //取数据的低8位 Send_Data.tx[3] = transition>>8; //取数据的高8位 //The target velocity of the Y-axis of the robot //机器人y轴的目标线速度 transition=0; transition = twist_aux->linear.y*1000; Send_Data.tx[6] = transition; 
Send_Data.tx[5] = transition>>8; //The target angular velocity of the robot's Z axis //机器人z轴的目标角速度 transition=0; transition = twist_aux->angular.z*1000; Send_Data.tx[8] = transition; Send_Data.tx[7] = transition>>8; Send_Data.tx[9]=Check_Sum(9,SEND_DATA_CHECK); //BCC check //BCC校验 Send_Data.tx[10]=FRAME_TAIL; //frame tail 0x7D //帧尾0X7D try { Stm32_Serial.write(Send_Data.tx,sizeof (Send_Data.tx)); //Sends data to the downloader via serial port //通过串口向下位机发送数据 } catch (serial::IOException& e) { RCLCPP_ERROR(this->get_logger(),("Unable to send data through serial port")); //If sending data fails, an error message is printed //如果发送数据失败,打印错误信息 } } /************************************** Date: January 14, 2022 Function: Subscription robot recharge flag bit topic, used to tell the lower machine speed command is normal command or recharge command 功能: 订阅机器人是否回充标志位话题,用于告诉下位机速度命令是正常命令还是回充命令 ***************************************/ void turn_on_robot::Recharge_Flag_Callback(const std_msgs::msg::Int8::SharedPtr Recharge_Flag) { AutoRecharge=Recharge_Flag->data; } //服务 void turn_on_robot::Set_Charge_Callback(const shared_ptr<turtlesim::srv::Spawn::Request> req,shared_ptr<turtlesim::srv::Spawn::Response> res) { Send_Data.tx[0]=FRAME_HEADER; //frame head 0x7B //֡ͷ0X7B if(round(req->x)==1) Send_Data.tx[1] = 1; else if(round(req->x)==2) Send_Data.tx[1] = 2; else if(round(req->x)==0) Send_Data.tx[1] = 0,AutoRecharge=0; Send_Data.tx[2] = 0; Send_Data.tx[3] = 0; Send_Data.tx[4] = 0; Send_Data.tx[5] = 0; Send_Data.tx[6] = 0; Send_Data.tx[7] = 0; Send_Data.tx[8] = 0; Send_Data.tx[9]=Check_Sum(9,SEND_DATA_CHECK); //For the BCC check bits, see the Check_Sum function //BCCУ��λ������μ�Check_Sum���� Send_Data.tx[10]=FRAME_TAIL; //frame tail 0x7D //֡β0X7D try { Stm32_Serial.write(Send_Data.tx,sizeof (Send_Data.tx)); //Sends data to the downloader via serial port //ͨ����������λ���������� } catch (serial::IOException& e) { res->name = "false"; } if( Send_Data.tx[1]==0 ) { if(charge_set_state==0) AutoRecharge=0,res->name = "true"; else res->name = "false"; } else { if(charge_set_state==1) res->name = "true"; else res->name = "false"; } return; } ////////// 回充发布与回调 //////// /************************************** Date: January 28, 2021 Function: Serial port communication check function, packet n has a byte, the NTH -1 byte is the check bit, the NTH byte bit frame end.Bit XOR results from byte 1 to byte n-2 are compared with byte n-1, which is a BCC check Input parameter: Count_Number: Check the first few bytes of the packet 功能: 串口通讯校验函数,数据包n有个字节,第n-1个字节为校验位,第n个字节位帧尾。第1个字节到第n-2个字节数据按位异或的结果与第n-1个字节对比,即为BCC校验 输入参数: Count_Number:数据包前几个字节加入校验 mode:对发送数据还是接收数据进行校验 ***************************************/ unsigned char turn_on_robot::Check_Sum(unsigned char Count_Number,unsigned char mode) { unsigned char check_sum=0,k; if(mode==0) //Receive data mode //接收数据模式 { for(k=0;k<Count_Number;k++) { check_sum=check_sum^Receive_Data.rx[k]; //By bit or by bit //按位异或 } } if(mode==1) //Send data mode //发送数据模式 { for(k=1;k<Count_Number;k++) { check_sum=check_sum^Send_Data.tx[k]; //By bit or by bit //按位异或 } } return check_sum; //Returns the bitwise XOR result //返回按位异或结果 } //自动回充专用校验位 unsigned char turn_on_robot::Check_Sum_AutoCharge(unsigned char Count_Number,unsigned char mode) { unsigned char check_sum=0,k; if(mode==0) //Receive data mode //接收数据模式 { for(k=0;k<Count_Number;k++) { check_sum=check_sum^Receive_AutoCharge_Data.rx[k]; //By bit or by bit //按位异或 } } return check_sum; } /************************************** Date: November 18, 2021 
Function: Read and verify the data sent by the lower computer frame by frame through the serial port, and then convert the data into international units 功能: 通过串口读取并逐帧校验下位机发送过来的数据,然后数据转换为国际单位 ***************************************/ bool turn_on_robot::Get_Sensor_Data_New() { short transition_16=0; //Intermediate variable //中间变量 //uint8_t i=0; uint8_t check=0,check2=0, error=1,error2=1,Receive_Data_Pr[1]; //Temporary variable to save the data of the lower machine //临时变量,保存下位机数据 static int count,count2; //Static variable for counting //静态变量,用于计数 Stm32_Serial.read(Receive_Data_Pr,sizeof(Receive_Data_Pr)); //Read the data sent by the lower computer through the serial port //通过串口读取下位机发送过来的数据 /*//View the received raw data directly and debug it for use//直接查看接收到的原始数据,调试使用 ROS_INFO("%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x", Receive_Data_Pr[0],Receive_Data_Pr[1],Receive_Data_Pr[2],Receive_Data_Pr[3],Receive_Data_Pr[4],Receive_Data_Pr[5],Receive_Data_Pr[6],Receive_Data_Pr[7], Receive_Data_Pr[8],Receive_Data_Pr[9],Receive_Data_Pr[10],Receive_Data_Pr[11],Receive_Data_Pr[12],Receive_Data_Pr[13],Receive_Data_Pr[14],Receive_Data_Pr[15], Receive_Data_Pr[16],Receive_Data_Pr[17],Receive_Data_Pr[18],Receive_Data_Pr[19],Receive_Data_Pr[20],Receive_Data_Pr[21],Receive_Data_Pr[22],Receive_Data_Pr[23]); */ Receive_Data.rx[count] = Receive_Data_Pr[0]; //Fill the array with serial data //串口数据填入数组 Receive_AutoCharge_Data.rx[count2] = Receive_Data_Pr[0]; Receive_Data.Frame_Header = Receive_Data.rx[0]; //The first part of the data is the frame header 0X7B //数据的第一位是帧头0X7B Receive_Data.Frame_Tail = Receive_Data.rx[23]; //The last bit of data is frame tail 0X7D //数据的最后一位是帧尾0X7D //接收到自动回充数据的帧头、上一个数据是24字节的帧尾,表明自动回充数据开始到来 if((Receive_Data_Pr[0] == AutoCharge_HEADER )||count2>0) count2++; else count2=0; if(Receive_Data_Pr[0] == FRAME_HEADER || count>0) //Ensure that the first data in the array is FRAME_HEADER //确保数组第一个数据为FRAME_HEADER count++; else count=0; //自动回充数据处理 if(count2 == AutoCharge_DATA_SIZE) { count2=0; if(Receive_AutoCharge_Data.rx[AutoCharge_DATA_SIZE-1]==AutoCharge_TAIL) //确认帧尾 { check2 = Check_Sum_AutoCharge(6,0);//校验位计算 if(check2 == Receive_AutoCharge_Data.rx[AutoCharge_DATA_SIZE-2]) //校验正确 { error2=0; } if(error2 == 0) //校验正确开始赋值 { transition_16 = 0; transition_16 |= Receive_AutoCharge_Data.rx[1]<<8; transition_16 |= Receive_AutoCharge_Data.rx[2]; Charging_Current = transition_16/1000+(transition_16 % 1000)*0.001; //充电电流 Red = Receive_AutoCharge_Data.rx[3]; //红外接受状态 Charging = Receive_AutoCharge_Data.rx[4];//小车充电状态 charge_set_state = Receive_AutoCharge_Data.rx[5]; check_AutoCharge_data = true; //数据成功接收标志位 } } } if(count == 24) //Verify the length of the packet //验证数据包的长度 { count=0; //Prepare for the serial port data to be refill into the array //为串口数据重新填入数组做准备 if(Receive_Data.Frame_Tail == FRAME_TAIL) //Verify the frame tail of the packet //验证数据包的帧尾 { check=Check_Sum(22,READ_DATA_CHECK); //BCC check passes or two packets are interlaced //BCC校验通过或者两组数据包交错 if(check == Receive_Data.rx[22]) { error=0; //XOR bit check successful //异或位校验成功 } if(error == 0) { /*//Check receive_data.rx for debugging use //查看Receive_Data.rx,调试使用 ROS_INFO("%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x-%x", Receive_Data.rx[0],Receive_Data.rx[1],Receive_Data.rx[2],Receive_Data.rx[3],Receive_Data.rx[4],Receive_Data.rx[5],Receive_Data.rx[6],Receive_Data.rx[7], 
Receive_Data.rx[8],Receive_Data.rx[9],Receive_Data.rx[10],Receive_Data.rx[11],Receive_Data.rx[12],Receive_Data.rx[13],Receive_Data.rx[14],Receive_Data.rx[15], Receive_Data.rx[16],Receive_Data.rx[17],Receive_Data.rx[18],Receive_Data.rx[19],Receive_Data.rx[20],Receive_Data.rx[21],Receive_Data.rx[22],Receive_Data.rx[23]); */ Receive_Data.Flag_Stop=Receive_Data.rx[1]; //set aside //预留位 Robot_Vel.X = Odom_Trans(Receive_Data.rx[2],Receive_Data.rx[3]); //Get the speed of the moving chassis in the X direction //获取运动底盘X方向速度 //Robot_Vel.Y = Odom_Trans(Receive_Data.rx[4],Receive_Data.rx[5]); //Get the speed of the moving chassis in the Y direction, The Y speed is only valid in the omnidirectional mobile robot chassis //获取运动底盘Y方向速度,Y速度仅在全向移动机器人底盘有效 //Robot_Vel.Z = Odom_Trans(Receive_Data.rx[6],Receive_Data.rx[7]); //Get the speed of the moving chassis in the Z direction //获取运动底盘Z方向速度 //MPU6050 stands for IMU only and does not refer to a specific model. It can be either MPU6050 or MPU9250 //Mpu6050仅代表IMU,不指代特定型号,既可以是MPU6050也可以是MPU9250 Mpu6050_Data.imu_rol = IMU_Trans(Receive_Data.rx[4],Receive_Data.rx[5]) * 0.01 * 3.14159/180; Mpu6050_Data.imu_yaw = IMU_Trans(Receive_Data.rx[6],Receive_Data.rx[7]) * 0.01 * 3.14159/180; Mpu6050_Data.accele_x_data = IMU_Trans(Receive_Data.rx[8],Receive_Data.rx[9]); //Get the X-axis acceleration of the IMU //获取IMU的X轴加速度 Mpu6050_Data.accele_y_data = IMU_Trans(Receive_Data.rx[10],Receive_Data.rx[11]); //Get the Y-axis acceleration of the IMU //获取IMU的Y轴加速度 Mpu6050_Data.accele_z_data = IMU_Trans(Receive_Data.rx[12],Receive_Data.rx[13]); //Get the Z-axis acceleration of the IMU //获取IMU的Z轴加速度 Mpu6050_Data.gyros_x_data = IMU_Trans(Receive_Data.rx[14],Receive_Data.rx[15]); //Get the X-axis angular velocity of the IMU //获取IMU的X轴角速度 Mpu6050_Data.gyros_y_data = IMU_Trans(Receive_Data.rx[16],Receive_Data.rx[17]); //Get the Y-axis angular velocity of the IMU //获取IMU的Y轴角速度 Mpu6050_Data.gyros_z_data = IMU_Trans(Receive_Data.rx[18],Receive_Data.rx[19]); //Get the Z-axis angular velocity of the IMU //获取IMU的Z轴角速度 //Linear acceleration unit conversion is related to the range of IMU initialization of STM32, where the range is ±2g=19.6m/s^2 //线性加速度单位转化,和STM32的IMU初始化的时候的量程有关,这里量程±2g=19.6m/s^2 Mpu6050.linear_acceleration.x = Mpu6050_Data.accele_x_data * 0.01 * 9.8; Mpu6050.linear_acceleration.y = Mpu6050_Data.accele_y_data * 0.01 * 9.8; Mpu6050.linear_acceleration.z = Mpu6050_Data.accele_z_data * 0.01 * 9.8; //The gyroscope unit conversion is related to the range of STM32's IMU when initialized. 
Here, the range of IMU's gyroscope is ±500°/s //Because the robot generally has a slow Z-axis speed, reducing the range can improve the accuracy //陀螺仪单位转化,和STM32的IMU初始化的时候的量程有关,这里IMU的陀螺仪的量程是±500°/s //因为机器人一般Z轴速度不快,降低量程可以提高精度 Mpu6050.angular_velocity.x = Mpu6050_Data.gyros_x_data * 0.01 *3.14159/180; Mpu6050.angular_velocity.y = Mpu6050_Data.gyros_y_data * 0.01 *3.14159/180; Mpu6050.angular_velocity.z = Mpu6050_Data.gyros_z_data * 0.01 *3.14159/180; //Get the battery voltage //获取电池电压 transition_16 = 0; transition_16 |= Receive_Data.rx[20]<<8; transition_16 |= Receive_Data.rx[21]; Power_voltage = transition_16/1000+(transition_16 % 1000)*0.001; //Unit conversion millivolt(mv)->volt(v) //单位转换毫伏(mv)->伏(v) return true; } } } return false; } /************************************** Date: January 28, 2021 Function: Loop access to the lower computer data and issue topics 功能: 循环获取下位机数据与发布话题 ***************************************/ void turn_on_robot::Control() { //_Last_Time = ros::Time::now(); _Last_Time = rclcpp::Node::now(); while(rclcpp::ok()) { try { //_Now = ros::Time::now(); _Now = rclcpp::Node::now(); Sampling_Time = (_Now - _Last_Time).seconds(); //Retrieves time interval, which is used to integrate velocity to obtain displacement (mileage) //获取时间间隔,用于积分速度获得位移(里程) if (true == Get_Sensor_Data_New()) //The serial port reads and verifies the data sent by the lower computer, and then the data is converted to international units //通过串口读取并校验下位机发送过来的数据,然后数据转换为国际单位 { //Robot_Pos.X+=(Robot_Vel.X * cos(Robot_Pos.Z) - Robot_Vel.Y * sin(Robot_Pos.Z)) * Sampling_Time; //Calculate the displacement in the X direction, unit: m //计算X方向的位移,单位:m //Robot_Pos.Y+=(Robot_Vel.X * sin(Robot_Pos.Z) + Robot_Vel.Y * cos(Robot_Pos.Z)) * Sampling_Time; //Calculate the displacement in the Y direction, unit: m //计算Y方向的位移,单位:m //Robot_Pos.Z+=Robot_Vel.Z * Sampling_Time; //The angular displacement about the Z axis, in rad //绕Z轴的角位移,单位:rad Robot_Pos.X+=(Robot_Vel.X * cos(Mpu6050_Data.imu_yaw)) * Sampling_Time; //Calculate the displacement in the X direction, unit: m //计算X方向的位移,单位:m Robot_Pos.Y+=(Robot_Vel.X * sin(Mpu6050_Data.imu_yaw)) * Sampling_Time; //Calculate the displacement in the Y direction, unit: m //计算Y方向的位移,单位:m //Robot_Pos.Z = 0 ; //Calculate the three-axis attitude from the IMU with the angular velocity around the three-axis and the three-axis acceleration //通过IMU绕三轴角速度与三轴加速度计算三轴姿态 Quaternion_Solution(Mpu6050.angular_velocity.x, Mpu6050.angular_velocity.y, Mpu6050.angular_velocity.z,\ Mpu6050.linear_acceleration.x, Mpu6050.linear_acceleration.y, Mpu6050.linear_acceleration.z); Publish_Odom(); //Pub the speedometer topic //发布里程计话题 Publish_ImuSensor(); //Pub the IMU topic //发布IMU话题 Publish_Voltage(); //Pub the topic of power supply voltage //发布电源电压话题 _Last_Time = _Now; //Record the time and use it to calculate the time interval //记录时间,用于计算时间间隔 } //自动回充数据话题 if(check_AutoCharge_data) { Publish_Charging(); //Pub a topic about whether the robot is charging //发布机器人是否在充电的话题 Publish_RED(); //Pub the topic whether the robot finds the infrared signal (charging station) //发布机器人是否寻找到红外信号(充电桩)的话题 Publish_ChargingCurrent(); //Pub the charging current topic //发布充电电流话题 check_AutoCharge_data = false; } rclcpp::spin_some(this->get_node_base_interface()); //The loop waits for the callback function //循环等待回调函数 } catch (const rclcpp::exceptions::RCLError & e ) { RCLCPP_ERROR(this->get_logger(),"unexpectedly failed whith %s",e.what()); } } } /************************************** Date: January 28, 2021 Function: Constructor, executed 
only once, for initialization 功能: 构造函数, 只执行一次,用于初始化 ***************************************/ turn_on_robot::turn_on_robot():rclcpp::Node ("wheeltec_robot") { Sampling_Time=0; Power_voltage=0; //Clear the data //清空数据 memset(&Robot_Pos, 0, sizeof(Robot_Pos)); memset(&Robot_Vel, 0, sizeof(Robot_Vel)); memset(&Receive_Data, 0, sizeof(Receive_Data)); memset(&Send_Data, 0, sizeof(Send_Data)); memset(&Mpu6050_Data, 0, sizeof(Mpu6050_Data)); //ros::NodeHandle private_nh("~"); //Create a node handle //创建节点句柄 //The private_nh.param() entry parameter corresponds to the initial value of the name of the parameter variable on the parameter server //private_nh.param()入口参数分别对应:参数服务器上的名称 参数变量名 初始值 this->declare_parameter<int>("serial_baud_rate",115200); this->declare_parameter<std::string>("usart_port_name", "/dev/ttyACM0"); this->declare_parameter<std::string>("odom_frame_id", "odom_combined"); this->declare_parameter<std::string>("robot_frame_id", "base_footprint"); this->declare_parameter<std::string>("gyro_frame_id", "gyro_link"); this->get_parameter("serial_baud_rate", serial_baud_rate);//Communicate baud rate 115200 to the lower machine //和下位机通信波特率115200 this->get_parameter("usart_port_name", usart_port_name);//Fixed serial port number //固定串口号 this->get_parameter("odom_frame_id", odom_frame_id);//The odometer topic corresponds to the parent TF coordinate //里程计话题对应父TF坐标 this->get_parameter("robot_frame_id", robot_frame_id);//The odometer topic corresponds to sub-TF coordinates //里程计话题对应子TF坐标 this->get_parameter("gyro_frame_id", gyro_frame_id);//IMU topics correspond to TF coordinates //IMU话题对应TF坐标 odom_publisher = create_publisher("odom", 2);//Create the odometer topic publisher //创建里程计话题发布者 imu_publisher = create_publisher<sensor_msgs::msg::Imu>("imu/data_raw", 2); //Create an IMU topic publisher //创建IMU话题发布者 voltage_publisher = create_publisher<std_msgs::msg::Float32>("PowerVoltage", 1);//Create a battery-voltage topic publisher //创建电池电压话题发布者 //回充发布者 Charging_publisher = create_publisher<std_msgs::msg::Bool>("robot_charging_flag", 10); Charging_current_publisher = create_publisher<std_msgs::msg::Float32>("robot_charging_current", 10); RED_publisher = create_publisher<std_msgs::msg::UInt8>("robot_red_flag", 10); //回充订阅者 Red_Vel_Sub = create_subscription<geometry_msgs::msg::Twist>( "red_vel", 10, std::bind(&turn_on_robot::Red_Vel_Callback, this, std::placeholders::_1)); Recharge_Flag_Sub = create_subscription<std_msgs::msg::Int8>( "robot_recharge_flag", 10, std::bind(&turn_on_robot::Recharge_Flag_Callback, this,std::placeholders::_1)); //回充服务提供 SetCharge_Service=this->create_service<turtlesim::srv::Spawn>\ ("/set_charge",std::bind(&turn_on_robot::Set_Charge_Callback,this, std::placeholders::_1 ,std::placeholders::_2)); //Set the velocity control command callback function //速度控制命令订阅回调函数设置 Cmd_Vel_Sub = create_subscription<geometry_msgs::msg::Twist>( "cmd_vel", 2, std::bind(&turn_on_robot::Cmd_Vel_Callback, this, _1)); RCLCPP_INFO(this->get_logger(),"wheeltec_robot Data ready"); //Prompt message //提示信息 try { //Attempts to initialize and open the serial port //尝试初始化与开启串口 Stm32_Serial.setPort(usart_port_name); //Select the serial port number to enable //选择要开启的串口号 Stm32_Serial.setBaudrate(serial_baud_rate); //Set the baud rate //设置波特率 serial::Timeout _time = serial::Timeout::simpleTimeout(2000); //Timeout //超时等待 Stm32_Serial.setTimeout(_time); Stm32_Serial.open(); //Open the serial port //开启串口 } catch (serial::IOException& e) { RCLCPP_ERROR(this->get_logger(),"wheeltec_robot can not open serial port,Please 
```cpp
        check the serial port cable! "); //If opening the serial port fails, an error message is printed //如果开启串口失败,打印错误信息
    }
    if(Stm32_Serial.isOpen())
    {
        RCLCPP_INFO(this->get_logger(),"wheeltec_robot serial port opened"); //Serial port opened successfully //串口开启成功提示
    }
}

/**************************************
Date: January 28, 2021
Function: Destructor, executed only once and called by the system when an object ends its life cycle
功能: 析构函数,只执行一次,当对象结束其生命周期时系统会调用这个函数
***************************************/
turn_on_robot::~turn_on_robot()
{
    //Sends the stop motion command to the lower machine before the turn_on_robot object ends
    //对象turn_on_robot结束前向下位机发送停止运动命令
    Send_Data.tx[0] = 0x06;
    Send_Data.tx[1] = FRAME_HEADER;
    Send_Data.tx[2] = 0; //The target velocity of the X-axis of the robot //机器人X轴的目标线速度
    Send_Data.tx[4] = 0;
    Send_Data.tx[3] = 0; //The target velocity of the Y-axis of the robot //机器人Y轴的目标线速度
    Send_Data.tx[6] = 0;
    Send_Data.tx[5] = 0; //The target velocity of the Z-axis of the robot //机器人Z轴的目标角速度
    Send_Data.tx[8] = 0;
    Send_Data.tx[7] = 0;
    Send_Data.tx[9] = Check_Sum(9, SEND_DATA_CHECK); //Check the bits for the Check_Sum function //校验位,规则参见Check_Sum函数
    Send_Data.tx[10] = 0x0d;
    Send_Data.tx[11] = 0x0a;
    try
    {
        Stm32_Serial.write(Send_Data.tx, sizeof(Send_Data.tx)); //Send data to the serial port //向串口发数据
    }
    catch (serial::IOException& e)
    {
        RCLCPP_ERROR(this->get_logger(), "Unable to send data through serial port"); //If sending data fails, an error message is printed //如果发送数据失败,打印错误信息
    }
    Stm32_Serial.close(); //Close the serial port //关闭串口
    RCLCPP_INFO(this->get_logger(), "Shutting down"); //Prompt message //提示信息
}
```

turn_on_wheeltec_robot.launch.py文件代码为:

```python
import os
from pathlib import Path
import launch
from launch.actions import SetEnvironmentVariable
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import (DeclareLaunchArgument, GroupAction,
                            IncludeLaunchDescription, SetEnvironmentVariable)
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PythonExpression
from launch_ros.actions import PushRosNamespace
import launch_ros.actions
from launch.conditions import IfCondition
from launch.conditions import UnlessCondition


def generate_launch_description():
    # Get the launch directory
    bringup_dir = get_package_share_directory('turn_on_wheeltec_robot')
    launch_dir = os.path.join(bringup_dir, 'launch')
    ekf_config = Path(get_package_share_directory('turn_on_wheeltec_robot'), 'config', 'ekf.yaml')
    ekf_carto_config = Path(get_package_share_directory('turn_on_wheeltec_robot'), 'config', 'ekf_carto.yaml')
    imu_config = Path(get_package_share_directory('turn_on_wheeltec_robot'), 'config', 'imu.yaml')

    carto_slam = LaunchConfiguration('carto_slam', default='false')
    carto_slam_dec = DeclareLaunchArgument('carto_slam', default_value='false')

    wheeltec_robot = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(launch_dir, 'base_serial.launch.py')),
    )
    robot_ekf = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(launch_dir, 'wheeltec_ekf.launch.py')),
        launch_arguments={'carto_slam': carto_slam}.items(),
    )

    base_to_link = launch_ros.actions.Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        name='base_to_link',
        arguments=['0', '0', '0', '0', '0', '0', 'base_footprint', 'base_link'],
    )
    base_to_gyro = launch_ros.actions.Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        name='base_to_gyro',
        arguments=['0', '0', '0', '0', '0', '0', 'base_link', 'gyro_link'],
    )
    imu_filter_node = launch_ros.actions.Node(
        package='imu_filter_madgwick',
        executable='imu_filter_madgwick_node',
        parameters=[imu_config]
    )

    # joint_state_publisher_node = launch_ros.actions.Node(
    #     package='joint_state_publisher',
    #     executable='joint_state_publisher',
    #     name='joint_state_publisher',
    # )

    # select a robot model, the default model is mini_mec
    # minibot.launch.py contains commonly used robot models
    # launch_arguments choices: mini_mec/mini_akm/mini_tank/mini_4wd/mini_diff/mini_omni/brushless_senior_diff
    # !!!At the same time, you need to modify ld.add_action(minibot_type) and ld.add_action(flagship_type)
    minibot_type = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(launch_dir, 'robot_mode_description_minibot.launch.py')),
        launch_arguments={'bike_robot': 'true'}.items(),
    )

    # robot_mode_description.launch.py contains flagship products, usually larger chassis robots
    # launch_arguments choices:
    # senior_akm/top_akm_bs/top_akm_dl
    # senior_mec_bs/senior_mec_dl/top_mec_bs/top_mec_dl/mec_EightDrive_robot/flagship_mec_bs_robot/flagship_mec_dl_robot
    # senior_omni/top_omni
    # senior_4wd_bs_robot/senior_4wd_dl_robot/flagship_4wd_bs_robot/flagship_4wd_dl_robot/top_4wd_bs_robot/top_4wd_dl_robot
    # senior_diff_robot/four_wheel_diff_bs/four_wheel_diff_dl/flagship_four_wheel_diff_bs_robot/flagship_four_wheel_diff_dl_robot
    # !!!At the same time, you need to modify ld.add_action(flagship_type) and ld.add_action(minibot_type)
    flagship_type = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(launch_dir, 'robot_mode_description.launch.py')),
        launch_arguments={'senior_akm': 'true'}.items(),
    )

    ld = LaunchDescription()
    ld.add_action(minibot_type)
    # ld.add_action(flagship_type)
    ld.add_action(carto_slam_dec)
    ld.add_action(wheeltec_robot)
    ld.add_action(base_to_link)
    ld.add_action(base_to_gyro)
    # ld.add_action(joint_state_publisher_node)
    ld.add_action(imu_filter_node)
    ld.add_action(robot_ekf)
    return ld
```

base_serial.launch.py文件代码为:

```python
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch.conditions import IfCondition
from launch.conditions import UnlessCondition
import launch_ros.actions


# def launch(launch_descriptor, argv):
def generate_launch_description():
    return LaunchDescription([
        launch_ros.actions.Node(
            package='turn_on_wheeltec_robot',
            executable='wheeltec_robot_node',
            output='screen',
            parameters=[{'usart_port_name': '/dev/ttyACM0',
                         'serial_baud_rate': 115200,
                         'robot_frame_id': 'base_footprint',
                         'odom_frame_id': 'odom_combined',
                         'cmd_vel': 'cmd_vel'}],
        )
    ])
```

robot_mode_description_minibot.launch.py文件代码为:

```python
import os
from ament_index_python.packages import get_package_share_directory
import launch_ros.actions
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, GroupAction, SetEnvironmentVariable
from launch.conditions import IfCondition
from launch.substitutions import LaunchConfiguration, PythonExpression
from launch_ros.actions import LoadComposableNodes
from launch_ros.actions import Node
from launch_ros.descriptions import ComposableNode
from nav2_common.launch import RewrittenYaml


def generate_robot_node(robot_urdf, child):
    # 用 robot_state_publisher 发布指定 URDF 模型
    return launch_ros.actions.Node(
        package='robot_state_publisher',
        executable='robot_state_publisher',
        name=f'robot_state_publisher_{child}',
        arguments=[os.path.join(get_package_share_directory('wheeltec_robot_urdf'),
                                'urdf', robot_urdf)],
    )


def generate_static_transform_publisher_node(translation, rotation, parent, child):
    # 发布 parent -> child 的静态坐标变换
    return launch_ros.actions.Node(
        package='tf2_ros',
        executable='static_transform_publisher',
        name=f'base_to_{child}',
        arguments=[translation[0], translation[1], translation[2],
                   rotation[0], rotation[1], rotation[2], parent, child],
    )


def generate_launch_description():
    bike_robot = LaunchConfiguration('bike_robot', default='false')
    bike_robot_ = GroupAction(
        condition=IfCondition(bike_robot),
        actions=[
            generate_robot_node('xuan_bike_robot.urdf', 'bike_robot'),
            generate_static_transform_publisher_node(['0.048', '0', '0.18'], ['0', '0', '0'], 'base_link', 'laser'),
            generate_static_transform_publisher_node(['0.048', '0', '0.18'], ['0', '0', '0'], 'base_link', 'camera_link'),
        ])

    # Create the launch description and populate
    ld = LaunchDescription()
    # Set environment variables
    # Declare the launch options
    # ld.add_action(declare_use_composition_cmd)

    # Add the actions to launch all of the localization nodes
    # 注意:以下 mini_mec_ 等 GroupAction 的定义未包含在本片段中,结构应与上面的 bike_robot_ 类似
    ld.add_action(mini_mec_)
    ld.add_action(mini_akm_)
    ld.add_action(mini_tank_)
    ld.add_action(mini_4wd_)
    ld.add_action(mini_diff_)
    ld.add_action(brushless_senior_diff_)
    ld.add_action(bike_robot_)
    return ld
```

xuan_bike_robot.urdf文件代码为:

```xml
<?xml version="1.0"?>
<robot name="wheeltec_robot">
  <!-- URDF 规范要求 visual 必须嵌套在 link 元素内;原文 filename 与 scale 属性之间还缺少空格 -->
  <link name="base_link">
    <visual>
      <geometry>
        <mesh filename="file:///home/deepbot/ros-relate/wheeltec_ros2/src/wheeltec_robot_urdf/wheeltec_robot_urdf/meshes/Xuan-release.stl" scale="0.001 0.001 0.001"/>
      </geometry>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <material name="blue">
        <color rgba="0 0 0.8 0.5"/>
      </material>
    </visual>
  </link>
</robot>
```

最新推荐

基于QT的调色板

【基于QT的调色板】是一个使用Qt框架开发的色彩选择工具,类似于Windows操作系统中常见的颜色选取器。Qt是一个跨平台的应用程序开发框架,广泛应用于桌面、移动和嵌入式设备,支持C++和QML语言。这个调色板功能提供了横竖两种渐变模式,用户可以方便地选取所需的颜色值。

在Qt中,调色板(QPalette)是一个关键的类,用于管理应用程序的视觉样式。QPalette包含了一系列的颜色角色,如背景色、前景色、文本色、高亮色等,这些颜色可以根据用户的系统设置或应用程序的需求进行定制。通过自定义QPalette,开发者可以创建具有独特视觉风格的应用程序。

该调色板功能可能使用了QColorDialog,这是一个标准的Qt对话框,允许用户选择颜色。QColorDialog提供了一种简单的方式来获取用户的颜色选择,通常包括一个调色板界面,用户可以通过滑动或点击来选择RGB、HSV或其他色彩模型中的颜色。

横渐变取色可能通过QGradient实现,QGradient允许开发者创建线性或径向的色彩渐变。线性渐变(QLinearGradient)沿直线从一个点到另一个点过渡颜色,而径向渐变(QRadialGradient)则以圆心为中心向外扩散颜色。在调色板中,用户可能可以通过滑动条或鼠标拖动来改变渐变的位置,从而选取不同位置的颜色。

竖渐变取色则可能是通过调整QGradient的方向来实现的,将原本水平的渐变方向改为垂直。这种设计可以提供另一种方式来探索颜色空间,使得选取颜色更为直观和便捷。

在【colorpanelhsb】这个文件名中,我们可以推测这是与HSB(色相、饱和度、亮度)色彩模型相关的代码或资源。HSB模型是另一种常见且直观的颜色表示方式,与RGB或CMYK模型不同,它以人的感知为基础,更容易理解。在这个调色板中,用户可能可以通过调整H、S、B三个参数来选取所需的颜色。

总之,基于QT的调色板是一个利用Qt框架及其色彩管理工具(如QPalette、QColorDialog、QGradient等)构建的交互式颜色选择组件。它不仅提供了横竖渐变的色彩选取方式,还可能支持HSB色彩模型,使得用户在开发图形用户界面时能更加灵活和精准地控制色彩。
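下面给出一个最小示意,演示"横向渐变 + 点击取色"的思路。原文场景为 C++/Qt,这里仅为便于演示而改用 PyQt5(Python 绑定,类与方法一一对应);GradientPicker 为示意用的假设类名,假设已安装 PyQt5:

```python
# 横向渐变取色面板的最小示意(假设已安装 PyQt5;GradientPicker 为假设的类名)
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QColorDialog
from PyQt5.QtGui import QPainter, QLinearGradient, QColor, QBrush


class GradientPicker(QWidget):
    def paintEvent(self, event):
        # 从左到右的线性渐变:红 -> 绿 -> 蓝,对应"横渐变取色"
        gradient = QLinearGradient(0, 0, self.width(), 0)
        gradient.setColorAt(0.0, QColor(255, 0, 0))
        gradient.setColorAt(0.5, QColor(0, 255, 0))
        gradient.setColorAt(1.0, QColor(0, 0, 255))
        painter = QPainter(self)
        painter.fillRect(self.rect(), QBrush(gradient))

    def mousePressEvent(self, event):
        # grab() 把控件渲染为 QPixmap,再读取点击位置的像素颜色
        color = self.grab().toImage().pixelColor(event.pos())
        print("picked:", color.name())  # 例如 #7f7f00


if __name__ == "__main__":
    app = QApplication(sys.argv)
    w = GradientPicker()
    w.resize(360, 60)
    w.show()
    # 也可以直接调用标准颜色对话框:color = QColorDialog.getColor()
    sys.exit(app.exec_())
```

竖渐变只需把 QLinearGradient 的终点从 (width, 0) 改为 (0, height) 即可。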

美国国际航空交通数据分析报告(1990-2020)

根据给定的信息,我们可以从中提取和分析以下知识点:

1. 数据集概述:该数据集名为“U.S. International Air Traffic data(1990-2020)”,记录了美国与国际间航空客运和货运的详细统计信息。数据集涵盖的时间范围从1990年至2020年,这说明它包含了长达30年的时间序列数据,对于进行长期趋势分析非常有价值。

2. 数据来源及意义:此数据来源于《美国国际航空客运和货运统计报告》,该报告是美国运输部(USDOT)所管理的T-100计划的一部分。T-100计划旨在收集和发布美国和国际航空公司在美国机场的出入境交通报告,这表明数据的权威性和可靠性较高,适用于政府、企业和学术研究等领域。

3. 数据内容及应用:数据集包含两个主要的CSV文件,分别是“International_Report_Departures.csv”和“International_Report_Passengers.csv”。

   a. International_Report_Departures.csv文件可能包含了以下内容:
   - 离港航班信息:记录了各航空公司的航班号、起飞和到达时间、起飞和到达机场的代码以及国际地区等信息。
   - 航空公司信息:可能包括航空公司代码、名称以及所属国家等。
   - 飞机机型信息:如飞机类型、座位容量等,这有助于分析不同机型的使用频率和趋势。
   - 航线信息:包括航线的起始和目的国家及城市,对于研究航线网络和优化航班计划具有参考价值。

   这些数据可以用于航空交通流量分析、机场运营效率评估、航空市场分析等。

   b. International_Report_Passengers.csv文件可能包含了以下内容:
   - 航班乘客信息:可能包括乘客的国籍、年龄、性别等信息。
   - 航班类型:如全客机、全货机或混合型航班,可以分析乘客运输和货物运输的比例。
   - 乘客数量:记录了各航班或航线的乘客数量,对于分析航空市场容量和增长趋势很有帮助。
   - 飞行里程信息:有助于了解国际间不同航线的长度和飞行距离,为票价设置和燃油成本分析提供数据支持。

   这些数据可以用于航空客运市场分析、需求预测、收益管理等方面。

4. 数据分析和应用实例:
   - 航空流量分析:通过分析离港航班数据,可以观察到哪些航线最为繁忙,哪些机场的国际航空流量最大,这有助于航空公司调整航班时刻表和运力分配。
   - 市场研究:乘客数据可以揭示不同国家和地区之间的人口流动趋势,帮助航空公司和政府机构了解国际旅行市场的需求变化。
   - 飞机利用率:结合飞机机型和飞行频率信息,可以对特定机型的使用率进行分析,评估飞机维护需求和燃油效率。
   - 安全监管:通过对比不同航空公司和航班的安全记录,监管机构可以更有效地评估航空公司的安全性能,并采取必要的监管措施。

5. 技术和方法论:分析此类数据通常涉及数据清洗、数据整合、统计分析、时间序列分析、预测建模等数据科学方法。使用Excel、SQL、R、Python等工具进行数据处理和分析是常见的做法。例如,可以使用Python的Pandas库来清洗和准备数据,使用Matplotlib和Seaborn库来可视化数据,然后利用Scikit-learn或Statsmodels库来构建预测模型。

通过以上知识点的提取和分析,我们可以理解到“U.S. International Air Traffic data(1990-2020)-数据集”的重要性,它不仅记录了跨越30年的航空交通数据,还为各种分析和应用提供了详实的基础信息。对于航空业从业者、政策制定者、研究人员以及数据分析师来说,这是一个极具价值的数据资源。
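承接上面第5点的方法论,下面是一个用 pandas 载入并按年聚合该数据集的 Python 示意(文件名来自数据集描述;列名 Year、Total 为假设,实际使用前请先用 df.columns 核对表头):

```python
# 读取并按年聚合的示意(pandas);列名 Year / Total 为假设,请以实际表头为准
import pandas as pd

departures = pd.read_csv("International_Report_Departures.csv")
passengers = pd.read_csv("International_Report_Passengers.csv")

print(passengers.columns.tolist())   # 先查看实际列名再做聚合

# 假设存在 Year 与 Total 列:统计各年度的国际旅客总量
yearly = passengers.groupby("Year")["Total"].sum().sort_index()
print(yearly.tail())                 # 观察 2020 年前后的变化趋势
```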

统计学视角:深入理解最小二乘法的概率论基础

# 1. 最小二乘法的基本概念

最小二乘法(Least Squares Method, LSM)是统计学和数据分析中广泛使用的一种数学优化技术。其主要目的是通过最小化误差的平方和来寻找数据的最佳函数匹配。这种方法最早由高斯提出,如今在工程、物理、经济和其他领域中都有着广泛的应用。在本章中,我们将首先了解最小二乘法的基本概念。
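可以用一个标准结论概括这里所说的概率论基础(以下为通用推导结论,非原文原式):在线性模型中,若误差服从独立同方差的高斯分布,则最小二乘估计与极大似然估计一致:

$$
y = X\beta + \varepsilon,\quad \varepsilon \sim \mathcal{N}(0,\sigma^2 I)
\;\Rightarrow\;
\hat{\beta} = \arg\min_{\beta}\lVert y - X\beta\rVert^2 = (X^\top X)^{-1}X^\top y
$$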

vscode中使用Codeium

在 VSCode 中使用 Codeium 的基本流程如下:首先在 VSCode 的扩展商店中搜索 Codeium 并安装;安装完成后按提示登录 Codeium 账号,必要时可在官网仪表板获取 API 密钥完成认证,若登录失败可检查网络或代理设置;登录后即可使用两类核心功能——代码自动补全(输入时自动给出内联建议,按 Tab 接受)和 Codeium Chat(可用于解释代码、定位问题、生成注释与重构建议)。

UniMoCo:统一框架下的多监督视觉学习方法

在详细解析“unimoco”这个概念之前,我们需要明确几个关键点。首先,“unimoco”代表的是一种视觉表示学习方法,它在机器学习尤其是深度学习领域中扮演着重要角色。其次,文章作者通过这篇论文介绍了UniMoCo的全称,即“Unsupervised, Semi-Supervised and Full-Supervised Visual Representation Learning”,其背后的含义在于UniMoCo框架整合了无监督学习、半监督学习和全监督学习三种不同的学习策略。最后,该框架由官方用PyTorch实现,并已提供给研究者和开发者社区。

### 1. 对比学习(Contrastive Learning)

UniMoCo的概念根植于对比学习的思想,这是一种无监督学习的范式。对比学习的核心在于让模型学会区分不同的样本,通过将相似的样本拉近,将不相似的样本推远,从而学习到有效的数据表示。对比学习与传统的分类任务最大的不同在于不需要手动标注的标签来指导学习过程,取而代之的是从数据自身结构中挖掘信息。

### 2. MoCo(Momentum Contrast)

UniMoCo的实现基于MoCo框架,MoCo是一种基于队列(queue)的对比学习方法,它在训练过程中维持一个动态的队列,其中包含了成对的负样本。MoCo通过Momentum Encoder(动量编码器)和一个队列来保持稳定和历史性的负样本信息,使得模型能够持续地进行对比学习,即使是在没有足够负样本的情况下。

### 3. 无监督学习(Unsupervised Learning)

在无监督学习场景中,数据样本没有被标记任何类别或标签,算法需自行发现数据中的模式和结构。UniMoCo框架中,无监督学习的关键在于使用没有标签的数据进行训练,其目的是让模型学习到数据的基础特征表示,这对于那些标注资源稀缺的领域具有重要意义。

### 4. 半监督学习(Semi-Supervised Learning)

半监督学习结合了无监督和有监督学习的优势,它使用少量的标注数据与大量的未标注数据进行训练。UniMoCo中实现半监督学习的方式,可能是通过将已标注的数据作为对比学习的一部分,以此来指导模型学习到更精准的特征表示。这对于那些拥有少量标注数据的场景尤为有用。

### 5. 全监督学习(Full-Supervised Learning)

在全监督学习中,所有的训练样本都有相应的标签,这种学习方式的目的是让模型学习到从输入到输出的映射关系。在UniMoCo中,全监督学习用于训练阶段,让模型在有明确指示的学习目标下进行优化,学习到任务相关的特征表示。这通常用于有充足标注数据的场景,比如图像分类任务。

### 6. PyTorch

PyTorch是一个开源机器学习库,由Facebook的人工智能研究团队开发,主要用于计算机视觉和自然语言处理等任务。它被广泛用于研究和生产环境,并且因其易用性、灵活性和动态计算图等特性受到研究人员的青睐。UniMoCo官方实现选择PyTorch作为开发平台,说明了其对科研社区的支持和对易于实现的重视。

### 7. 视觉表示学习(Visual Representation Learning)

视觉表示学习的目的是从原始视觉数据中提取特征,并将它们转换为能够反映重要信息且更易于处理的形式。在UniMoCo中,无论是无监督、半监督还是全监督学习,最终的目标都是让模型学习到有效的视觉表示,这些表示可以用于下游任务,如图像分类、目标检测、图像分割等。

### 8. 标签队列(Label Queue)

UniMoCo通过标签队列维护监督标签,这可能意味着对于那些半监督或全监督学习的任务,模型在进行对比学习时,会参考这些来自标签队列的数据。标签队列机制能帮助模型更好地利用有限的标注数据,增强模型的泛化能力。

### 结论

UniMoCo的提出,以及其官方PyTorch实现的发布,将对计算机视觉领域产生深远影响。它不仅提供了一个统一的对比学习框架,使得从无监督到全监督的学习过程更加灵活和高效,而且为研究者们提供了一个强力的工具,以便更好地探索和实现各种视觉任务。UniMoCo的研究和应用前景,为机器学习尤其是深度学习在视觉领域的研究和实践提供了新的视角和可能。
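为帮助理解 UniMoCo 所基于的队列对比机制,下面给出 MoCo 风格对比损失(InfoNCE)的 PyTorch 最小草图;这只是概念示意,并非 UniMoCo 的官方实现:

```python
# MoCo 风格 InfoNCE 损失的概念示意(非 UniMoCo 官方实现)
import torch
import torch.nn.functional as F


def info_nce_loss(q, k, queue, temperature=0.07):
    """q: 查询特征 (N, C);k: 对应正样本特征 (N, C);queue: 负样本队列 (C, K)。均已 L2 归一化。"""
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)   # (N, 1) 与正样本的相似度
    l_neg = torch.einsum("nc,ck->nk", q, queue)            # (N, K) 与队列中负样本的相似度
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # 正样本固定位于第 0 列,等价于 K+1 类分类问题
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)


# 用法示意:随机特征仅用于演示维度
q = F.normalize(torch.randn(8, 128), dim=1)
k = F.normalize(torch.randn(8, 128), dim=1)
queue = F.normalize(torch.randn(128, 4096), dim=0)
print(info_nce_loss(q, k, queue))
```

按前文的描述,UniMoCo 在此基础上额外维护一个标签队列,使同类样本可以充当额外的正样本,从而把无监督、半监督和全监督统一到同一个对比目标中。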

【MATLAB算法精讲】:最小二乘法的实现与案例深度分析

# 1. 最小二乘法的基本原理

最小二乘法是一种数学优化技术,它通过最小化误差的平方和来寻找数据的最佳函数匹配。其核心思想是选择一条曲线,使得所有观测点到这条曲线的偏差平方和最小。这种方法广泛应用于统计学、信号处理、工程学和经济学等领域,尤其适用于需要通过一组数据点来确定函数参数的情况。

## 1.1 统计学视角下的最小二乘法

在统计学中,最小二乘法经常用于回归分析中的参数估计。
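原文示例基于 MATLAB;为与本文其他代码语言保持一致,这里给出等价思路的 Python 草图(直线拟合,相当于 MATLAB 中的 polyfit(x, y, 1);数据为随手构造的演示值):

```python
# 用最小二乘拟合直线 y = a*x + b 的示意(numpy;数据仅作演示)
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.vstack([x, np.ones_like(x)]).T            # 设计矩阵 [x, 1]
(a, b), residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"a={a:.3f}, b={b:.3f}")                    # 拟合得到的斜率与截距
```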

Idea使用教程+jdk配置

在 IntelliJ IDEA 中配置 JDK 的基本步骤:打开 File → Project Structure(快捷键 Ctrl+Alt+Shift+S),在 Platform Settings → SDKs 中点击 “+”(或 New 按钮)选择 Add JDK,定位到 JDK 的安装根目录(例如 D:\develop\Java\jdk-17)后确认;随后在 Project 选项卡中为项目选择该 SDK 及对应的语言级别。新建项目时,也可以在向导中通过 New 按钮直接选择已安装的 JDK 版本,完成项目创建。

GitHub入门实践:审查拉取请求指南

从提供的文件信息中,我们可以抽取以下知识点:

**GitHub入门与Pull Request(PR)的审查**

**知识点1:GitHub简介**

GitHub是一个基于Git的在线代码托管和版本控制平台,它允许开发者在互联网上进行代码的托管和协作。通过GitHub,用户可以跟踪和管理代码变更,参与开源项目,或者创建自己的私有仓库进行项目协作。GitHub为每个项目提供了问题跟踪和任务管理功能,支持Pull Request机制,以便用户之间可以进行代码的审查和讨论。

**知识点2:Pull Request的作用与审查**

Pull Request(PR)是协作开发中的一个重要机制,它允许开发者向代码库贡献代码。当开发者在自己的分支上完成开发后,他们可以向主分支(或其他分支)提交一个PR,请求合入他们的更改。此时,其他开发者,包括项目的维护者,可以审查PR中的代码变更,进行讨论,并最终决定是否合并这些变更到目标分支。

**知识点3:审查Pull Request的步骤**

1. 访问GitHub仓库,并查看“Pull requests”标签下的PR列表。
2. 选择一个PR进行审查,点击进入查看详细内容。
3. 查看PR的标题、描述以及涉及的文件变更。
4. 浏览代码的具体差异,可以逐行审查,也可以查看代码变更的概览。
5. 在PR页面添加评论,可以针对整个PR,也可以针对特定的代码行或文件。
6. 当审查完成后,可以提交评论,或者批准、请求修改或关闭PR。

**知识点4:代码审查的最佳实践**

1. 确保PR的目标清晰且具有针对性,避免过于宽泛。
2. 在审查代码时,注意代码的质量、结构以及是否符合项目的编码规范。
3. 提供建设性的反馈,指出代码的优点和需要改进的地方。
4. 使用清晰、具体的语言,避免模糊和主观的评论。
5. 鼓励开发者间的协作,而不是单向的批评。
6. 经常审查PR,以避免延迟和工作积压。

**知识点5:HTML基础**

HTML(HyperText Markup Language)是用于创建网页的标准标记语言。它通过各种标签(如`<p>`用于段落,`<img>`用于图片,`<a>`用于链接等)来定义网页的结构和内容。HTML文档由元素组成,这些元素通过开始标签和结束标签来标识。例如,`<p>This is a paragraph.</p>`。HTML的最新版本是HTML5,它引入了许多新的元素和API,增强了对多媒体、图形和本地存储的支持。

**知识点6:GitHub Pages功能介绍**

GitHub Pages是一个静态站点托管服务,允许用户直接从GitHub仓库中发布个人、组织或项目的网站。你可以通过设置一个专门的分支来存放你的网站源代码,然后利用GitHub Pages的设置选项,选择分支并发布你的网站。发布的网站将可以通过一个自定义的URL访问,这个URL通常是`username.github.io/repo-name`的格式。这为开发者提供了一个简单而快速的方法来搭建个人或项目的展示页面。

**知识点7:简单的游戏开发实践**

文件描述中提到了一个基于项目的学习活动,通过游戏的形式,让入门人员体验操作和理解基本的游戏开发概念。在这个活动中,参与者通过键盘操作控制形状的旋转和移动,目标是创建无空隙的完整行,这涉及到游戏逻辑、用户输入处理和图形界面显示等基础知识。该活动可能使用了HTML、JavaScript和CSS等前端技术实现,参与者通过实践操作来学习如何编程,并理解基本的游戏设计原理。

【R语言高级教程】:最小二乘法从入门到精通

# 1. 最小二乘法的理论基础

最小二乘法是数学优化技术之一,广泛应用于统计学、数据分析和工程学等领域。其核心思想在于找到一条直线(或曲线),使得所有观测点的误差平方和最小。这一方法简单易行,但由于平方会放大大偏差的影响,它对异常值较为敏感,实际使用时往往需要先处理离群点才能得到稳定的参数估计。

## 1.1 最小二乘法的数学表述

在最基本的线性回归模型中,最小二乘法通过最小化误差的平方和来寻找最优的参数估计。

cadence画PCB时改变线长

在 Cadence(Allegro)中调整走线长度以满足等长或时序要求,常用方法包括:一是手动调整走线路径,借助实时显示的走线长度判断是否达标;二是在空间允许的区域使用蛇形走线(延时调整)增加线长;三是在约束管理器(Constraint Manager)中为网络或差分对设置目标长度、相对等长等规则,在约束导向的环境下布线并实时检查。此外要注意,过孔也会计入走线总长,必要时可减少过孔数量或调整过孔位置。