Reinforcement Learning Lane-Change Decision Making Based on SUMO

This article presents a reinforcement learning demo built on the combination of SUMO and gym.

1. Dependencies

The main dependency versions used in this article are as follows:

gym==0.21.0
stable-baselines3==1.8.0

The SUMO version used is 1.8.0.
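
A quick sanity check of the Python side (a minimal sketch, assuming the packages above are installed with pip and the SUMO_HOME environment variable points to the SUMO 1.8.0 installation):

import os
import gym
import stable_baselines3

print(gym.__version__)                  # expected: 0.21.0
print(stable_baselines3.__version__)    # expected: 1.8.0
print(os.environ.get("SUMO_HOME"))      # should point to the SUMO 1.8.0 installation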

2. Lane-Change Decision Task

2.1. Scenario Description

A straight two-lane highway, where vehicle A and vehicle B drive straight in the same lane. Vehicle A is the lead vehicle, a slow car with a maximum speed of 10 m/s; vehicle B is the following vehicle, a fast car with a maximum speed of 25 m/s, as shown in the figure below. Decision task: vehicle B overtakes vehicle A by changing to the left lane.
Figure: lane-change decision scenario

2.2. SUMO Configuration

Road network file: JingZan.net.xml

<?xml version="1.0" encoding="UTF-8"?>

<!-- generated on 09/06/23 16:56:12 by Eclipse SUMO netedit Version 1.8.0
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.dlr.de/xsd/netconvertConfiguration.xsd">

    <input>
        <sumo-net-file value="D:\code\Motion_Planning\SumoGym\sumo_config\JingZan.net.xml"/>
    </input>

    <output>
        <output-file value="D:\code\Motion_Planning\SumoGym\sumo_config\JingZan.net.xml"/>
    </output>

    <processing>
        <geometry.min-radius.fix.railways value="false"/>
        <geometry.max-grade.fix value="false"/>
        <offset.disable-normalization value="true"/>
        <lefthand value="false"/>
    </processing>

    <junctions>
        <no-turnarounds value="true"/>
        <junctions.corner-detail value="5"/>
        <junctions.limit-turn-speed value="5.5"/>
        <rectangular-lane-cut value="false"/>
    </junctions>

    <pedestrian>
        <walkingareas value="false"/>
    </pedestrian>

    <report>
        <aggregate-warnings value="5"/>
    </report>

</configuration>
-->

<net version="1.6" junctionCornerDetail="5" limitTurnSpeed="5.50" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.dlr.de/xsd/net_file.xsd">

    <location netOffset="0.00,0.00" convBoundary="0.00,0.00,2000.00,0.00" origBoundary="-10000000000.00,-10000000000.00,10000000000.00,10000000000.00" projParameter="!"/>

    <edge id=":gneJ1_0" function="internal">
        <lane id=":gneJ1_0_0" index="0" speed="30.00" length="0.10" shape="1000.00,-4.80 1000.00,-4.80"/>
        <lane id=":gneJ1_0_1" index="1" speed="30.00" length="0.10" shape="1000.00,-1.60 1000.00,-1.60"/>
    </edge>

    <edge id="gneE0" from="gneJ0" to="gneJ1" priority="-1">
        <lane id="gneE0_0" index="0" speed="30.00" length="1000.00" shape="0.00,-4.80 1000.00,-4.80"/>
        <lane id="gneE0_1" index="1" speed="30.00" length="1000.00" shape="0.00,-1.60 1000.00,-1.60"/>
    </edge>
    <edge id="gneE1" from="gneJ1" to="gneJ2" priority="-1">
        <lane id="gneE1_0" index="0" speed="30.00" length="1000.00" shape="1000.00,-4.80 2000.00,-4.80"/>
        <lane id="gneE1_1" index="1" speed="30.00" length="1000.00" shape="1000.00,-1.60 2000.00,-1.60"/>
    </edge>

    <junction id="gneJ0" type="dead_end" x="0.00" y="0.00" incLanes="" intLanes="" shape="0.00,0.00 0.00,-6.40"/>
    <junction id="gneJ1" type="priority" x="1000.00" y="0.00" incLanes="gneE0_0 gneE0_1" intLanes=":gneJ1_0_0 :gneJ1_0_1" shape="1000.00,0.00 1000.00,-6.40 1000.00,0.00">
        <request index="0" response="00" foes="00" cont="0"/>
        <request index="1" response="00" foes="00" cont="0"/>
    </junction>
    <junction id="gneJ2" type="dead_end" x="2000.00" y="0.00" incLanes="gneE1_0 gneE1_1" intLanes="" shape="2000.00,-6.40 2000.00,0.00"/>

    <connection from="gneE0" to="gneE1" fromLane="0" toLane="0" via=":gneJ1_0_0" dir="s" state="M"/>
    <connection from="gneE0" to="gneE1" fromLane="1" toLane="1" via=":gneJ1_0_1" dir="s" state="M"/>

    <connection from=":gneJ1_0" to="gneE1" fromLane="0" toLane="0" dir="s" state="M"/>
    <connection from=":gneJ1_0" to="gneE1" fromLane="1" toLane="1" dir="s" state="M"/>

</net>
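
The network can be sanity-checked from Python with sumolib (a small sketch; it assumes SUMO_HOME is already set so that sumolib is importable, as in the test script of section 2.3):

import sumolib

# read the network and list its (non-internal) edges
net = sumolib.net.readNet("sumo_config/JingZan.net.xml")
print([edge.getID() for edge in net.getEdges()])   # should contain 'gneE0' and 'gneE1'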

Route file: my_route1.rou.xml. Note that the vType names do not line up with the A/B labels above: the slow lead vehicle vehicle1 uses vType CarB (maxSpeed 10), while the fast ego vehicle self_car uses vType CarA (maxSpeed 25).

<routes>

   <vType id="CarA" accel="2.5" decel="4.5" emergencyDecel="9.0" length="5" color="1,1,0" vClass="passenger" maxSpeed="25" carFollowModel="IDM"/>
   <vType id="CarB" accel="2.5" decel="4.5" emergencyDecel="9.0" length="5" color="1,1,0" vClass="passenger" maxSpeed="10" carFollowModel="IDM"/>

   <route id="route1" edges="gneE0 gneE1" />

   <vehicle id="vehicle1" depart="0" departLane="0" arrivalLane="0" departPos="3" departSpeed="1" route="route1" type="CarB" color="1,0,0"/>
   <vehicle id="self_car" depart="5" departLane="0" arrivalLane="0" departPos="3" departSpeed="1" route="route1" type="CarA" color="1,1,0"/>

</routes>


SUMO configuration file: my_config_file.sumocfg. The lanechange.duration setting below makes each lane-change maneuver take 5 s instead of being instantaneous.

<configuration>
   <input>
     <net-file value="JingZan.net.xml"/>
     <route-files value="my_route1.rou.xml"/>
   </input>
   <time>
     <begin value="0"/>
     <end value="100"/>
   </time>
   <gui_only>
     <start value="t"/>
     <quit-on-end value="t"/>
   </gui_only>
   <processing>
     <lanechange.duration value="5"/>
   </processing>
</configuration>

2.3. SUMO Environment Test

"""
@Author: Fhz
@Create Date: 2023/9/6 17:18
@File: sumoTest.py
@Description: 
@Modify Person Date: 
"""
import sys
import os
import time

if 'SUMO_HOME' in os.environ:
    tools = os.path.join(os.environ['SUMO_HOME'], 'tools')
    sys.path.append(tools)
else:
    sys.exit("Please declare environment variable 'SUMO_HOME'")

import traci
from sumolib import checkBinary


if_show_gui = True

if not if_show_gui:
    sumoBinary = checkBinary('sumo')
else:
    sumoBinary = checkBinary('sumo-gui')

sumocfgfile = "sumo_config/my_config_file.sumocfg"
traci.start([sumoBinary, "-c", sumocfgfile])

SIM_STEPS = [1, 100]
beginTime = SIM_STEPS[0]
duration = SIM_STEPS[1]

time.sleep(2)

egoID = "self_car"

for step in range(duration):
    traci.simulationStep(step)  # advance the simulation up to time "step"
    if egoID in traci.vehicle.getIDList():  # self_car only departs at t = 5 s
        traci.vehicle.changeLane(egoID, 0, 5)  # request to stay in lane 0 over 5 s

traci.close()


The test video is shown below:

sumo_test

3. A gym-Based SUMO Reinforcement Learning Environment

For how to build a custom gym reinforcement learning environment, see the earlier blog post: Reinforcement Learning Based on a Custom gym Environment.

3.1. Reinforcement Learning Environment Design

Action space: discrete actions {0: lane keeping; 1: left lane change}; low-level control uses the IDM car-following model and SUMO's default lane-change model.
State space: the states {x, y, speed} of vehicle B and vehicle A.
Reward function: composed of a time reward, a target-lane reward, a desired-speed reward and a safety reward (spelled out right after this list).
Termination conditions: {1: overtaking completed; 2: task timeout; 3: collision}.
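
Spelled out, the reward as implemented in the getRewards method below (symbols are the corresponding get_args parameters) is:

R_eff  = w_time * (-R_time) + w_lane * (-|x_ego - P_lane|) + w_speed * (-|v_ego - V_desired|),  clipped below at -30
R_safe = R_collision if a collision occurred, otherwise 0 (clipped below at -500)
R      = R_eff + R_safe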

"""
@Author: Fhz
@Create Date: 2023/9/6 19:52
@File: sumoGym.py
@Description: 
@Modify Person Date: 
"""
import os
import sys

# check if SUMO_HOME exists in environment variable
# if not, then need to declare the variable before proceeding
# makes it OS-agnostic
if "SUMO_HOME" in os.environ:
    tools = os.path.join(os.environ["SUMO_HOME"], "tools")
    sys.path.append(tools)
else:
    sys.exit("please declare environment variable 'SUMO_HOME'")

import gym
from gym import spaces
import numpy as np
import traci
from sumolib import checkBinary
from stable_baselines3 import PPO
import time
import argparse


class SumoGym(gym.Env):
    def __init__(self, args):
        self.current_state = None
        self.is_success = False
        self.show_gui = args.show_gui
        self.sumocfgfile = args.sumocfgfile
        self.egoID = args.egoID
        self.frontID = args.frontID
        self.start_time = args.start_time
        self.collision = args.collision
        self.sleep = args.sleep
        self.num_action = args.num_action
        self.lane_change_time = args.lane_change_time

        # Road config
        self.min_x_position = args.min_x_position
        self.max_x_position = args.max_x_position
        self.min_y_position = args.min_y_position
        self.max_y_position = args.max_y_position
        self.min_speed = args.min_speed
        self.max_speed = args.max_speed

        # Reward config
        self.w_time = args.w_time
        self.w_lane = args.w_lane
        self.w_speed = args.w_speed
        self.R_time = args.R_time
        self.P_lane = args.P_lane
        self.V_desired = args.V_desired
        self.R_collision = args.R_collision

        # Done config
        self.target_lane_id = args.target_lane_id
        self.max_count = args.max_count

        self.low = np.array([
            # ego vehicle
            self.min_x_position,
            self.min_y_position,
            self.min_speed,

            # ego leader
            self.min_x_position,
            self.min_y_position,
            self.min_speed,

        ], dtype=np.float32)

        self.high = np.array([
            # ego vehicle
            self.max_x_position,
            self.max_y_position,
            self.max_speed,

            # ego_leader
            self.max_x_position,
            self.max_y_position,
            self.max_speed,

        ], dtype=np.float32)

        self.action_space = spaces.Discrete(self.num_action)
        self.observation_space = spaces.Box(self.low, self.high, dtype=np.float32)

    def step(self, action):
        assert self.action_space.contains(action), "%r (%s) invalid" % (
            action,
            type(action),
        )
        if self.is_success:
            self.is_success = False

        if self.collision:
            self.collision = False

        self.count = self.count + 1
        traci.simulationStep(self.count)
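        # simulationStep(t) advances SUMO up to simulation time t, so every call
        # to env.step() corresponds to one second of simulated time.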

        egoLane = traci.vehicle.getLaneIndex(self.egoID)

        if action == 0:
            # lane keeping: stay in the right lane (lane index 0)
            traci.vehicle.changeLane(self.egoID, 0, self.lane_change_time)
        else:
            # lane change: move to the left lane (lane index 1)
            traci.vehicle.changeLane(self.egoID, 1, self.lane_change_time)

        self.current_state = self.getCurrentStates()

        Collision_Nums = traci.simulation.getCollidingVehiclesNumber()

        if Collision_Nums:
            print("collision num:{}".format(Collision_Nums))
            self.collision = True

        done = self.getDoneState(egoLane)
        reward = self.getRewards()
        info = {}

        if done:
            traci.close()

        return self.current_state, reward, done, info


    def render(self):
        self.show_gui = True
        pass


    def reset(self):

        self.collision = False
        if self.show_gui:
            sumoBinary = checkBinary('sumo-gui')
        else:
            sumoBinary = checkBinary('sumo')

        traci.start([sumoBinary, "-c", self.sumocfgfile])

        if self.sleep:
            time.sleep(2)

        for step in range(self.start_time):
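            # warm-up phase: step the simulation for start_time seconds so that both
            # vehicles have departed (vehicle1 departs at t = 0 s, self_car at t = 5 s)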
            self.count = step + 1
            traci.simulationStep(self.count)
            if self.count >= 5:
                traci.vehicle.changeLane(self.egoID, 0, self.lane_change_time)

        return self.getCurrentStates()


    def getVehicleStateViaId(self, vehicle_ID):
        """
        vehicle_ID: vehicle ID
        function: Get the Vehicle's state via vehicle ID
        """
        curr_pos = traci.vehicle.getPosition(vehicle_ID)
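        # The road runs along SUMO's x axis, so SUMO's x is the longitudinal
        # coordinate (stored as y here) and SUMO's y is the lateral one (stored as x).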
        y_ego, x_ego = curr_pos[0], curr_pos[1]
        speed = traci.vehicle.getSpeed(vehicle_ID)

        return x_ego, y_ego, speed


    def getCurrentStates(self):
        """
        function: Get all the states of vehicles, observation space.
        """

        x_ego, y_ego, speed_ego = self.getVehicleStateViaId(self.egoID)
        x_front, y_front, speed_front = self.getVehicleStateViaId(self.frontID)

        self.current_state = [x_ego, y_ego, speed_ego, x_front, y_front, speed_front]

        return self.current_state


    def getRewards(self):
        """
        function: get the reward after action.
        """

        # Efficient reward
        R_time = - self.R_time
        R_lane = -abs(self.current_state[0] - self.P_lane)
        R_speed = -abs(self.current_state[2] - self.V_desired)

        R_eff = self.w_time * R_time + self.w_lane * R_lane + self.w_speed * R_speed

        # Safety Reward
        if self.collision:
            R_safe = self.R_collision
        else:
            R_safe = 0

        R_eff = max(-30, R_eff)
        R_safe = max(-500, R_safe)

        Rewards = R_eff + R_safe

        return Rewards

    def getDoneState(self, ego_lane):
        """
        function: get the done state of simulation.
        """
        done = False

        if ego_lane == self.target_lane_id and self.current_state[1] >= self.current_state[4] + 30:
            done = True
            print("Mission success!")
            self.is_success = True
            return done

        if self.count >= self.max_count:
            done = True
            print("Exceeding maximum time!")
            return done

        if self.collision:
            done = True
            print("Collision occurs!")
            return done

        return done


def get_args():
    parser = argparse.ArgumentParser()
    # SUMO config
    parser.add_argument("--show_gui", type=bool, default=False, help="The flag of show SUMO gui.")
    parser.add_argument("--sumocfgfile", type=str, default="sumo_config/my_config_file.sumocfg", help="The path of the SUMO configure file.")
    parser.add_argument("--egoID", type=str, default="self_car", help="The ID of ego vehicle.")
    parser.add_argument("--frontID", type=str, default="vehicle1", help="The ID of front vehicle.")
    parser.add_argument("--start_time", type=int, default=10, help="The simulation step before learning.")
    parser.add_argument("--num_action", type=int, default=2, help="The number of action space.")
    parser.add_argument("--lane_change_time", type=float, default=5, help="The time of lane change.")

    # Road config
    parser.add_argument("--min_x_position", type=float, default=0.0, help="The minimum lateral position of vehicle.")
    parser.add_argument("--max_x_position", type=float, default=6.4, help="The maximum lateral position of vehicle.")
    parser.add_argument("--min_y_position", type=float, default=0.0, help="The minimum longitudinal position of vehicle.")
    parser.add_argument("--max_y_position", type=float, default=2000.0, help="The maximum longitudinal position of vehicle.")
    parser.add_argument("--min_speed", type=float, default=0.0, help="The minimum longitudinal speed of vehicle.")
    parser.add_argument("--max_speed", type=float, default=25.0, help="The maximum longitudinal speed of vehicle.")
    parser.add_argument("--count", type=int, default=0, help="The length of a training episode.")
    parser.add_argument("--collision", type=bool, default=False, help="The flag of collision of ego vehicle.")
    parser.add_argument("--sleep", type=bool, default=True, help="The flag of sleep of each simulation.")

    # Reward config
    parser.add_argument("--w_time", type=float, default=0.1, help="The weight of time consuming reward.")
    parser.add_argument("--w_lane", type=float, default=2, help="The weight of target lane reward.")
    parser.add_argument("--w_speed", type=float, default=0.1, help="The weight of desired speed reward.")
    parser.add_argument("--R_time", type=float, default=-1, help="The reward of time consuming.")
    parser.add_argument("--P_lane", type=float, default=-1.6, help="The lateral position of target lane.")
    parser.add_argument("--V_desired", type=float, default=20.0, help="The desired speed.")
    parser.add_argument("--R_collision", type=float, default=-400, help="The reward of ego vehicle collision.")

    # Done config
    parser.add_argument("--target_lane_id", type=int, default=1, help="The ID of target lane.")
    parser.add_argument("--max_count", type=int, default=25, help="The maximum length of a training episode.")

    args = parser.parse_args()

    return args

Environment test:

if __name__ == '__main__':
    args = get_args()
    env = SumoGym(args)

    ACTIONS_ALL = {
        0: 'Lane keeping',
        1: 'Lane change',
    }
    
    eposides = 10

    for eq in range(eposides):
        print("Test eposide: {}".format(eq))
        obs = env.reset()
        done = False
        rewards = 0
        counts = 0
        while not done:
            counts += 1
            time.sleep(0.01)
            action = env.action_space.sample()
            # print("The action is: {}".format(ACTIONS_ALL[action]))
            obs, reward, done, info = env.step(action)
            # env.render()
            rewards += reward
        print("The rewards is: {},  the counts is: {}".format(rewards, counts))

The test results are as follows:

Test eposide: 0
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.176627402856454,  the counts is: 11
Test eposide: 1
Loading configuration ... done.
Exceeding maximum time!
Step #25.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -55.62561542130102,  the counts is: 15
Test eposide: 2
Loading configuration ... done.
Exceeding maximum time!
Step #25.00 (0ms ?*RT. ?UPS, TraCI: 1ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -51.78561542130102,  the counts is: 15
Test eposide: 3
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -23.61662740285646,  the counts is: 11
Test eposide: 4
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.176627402856454,  the counts is: 11
Test eposide: 5
Loading configuration ... done.
Exceeding maximum time!
Step #25.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -56.31834670955517,  the counts is: 15
Test eposide: 6
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 1ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -22.336627402856458,  the counts is: 11
Test eposide: 7
Loading configuration ... done.
Mission success!
Step #24.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -43.87270032448571,  the counts is: 14
Test eposide: 8
Loading configuration ... done.
Exceeding maximum time!
Step #25.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -51.78561542130102,  the counts is: 15
Test eposide: 9
Loading configuration ... done.
Exceeding maximum time!
Step #25.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -55.91591460997491,  the counts is: 15

From the test results, the task success rate with a random policy is 5/10 = 0.5.
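
The success count can also be tallied automatically via the is_success flag that SumoGym sets in getDoneState (a small sketch, not part of the original script):

args = get_args()
env = SumoGym(args)

episodes = 10
successes = 0
for _ in range(episodes):
    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
    successes += int(env.is_success)   # set by getDoneState only on "Mission success!"

print("Success rate: {}/{}".format(successes, episodes))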

3.2. Reinforcement Learning with PPO

The reinforcement learning algorithm used is PPO. Model training code:

if __name__ == '__main__':
    args = get_args()
    env = SumoGym(args)

    model = PPO('MlpPolicy', env,
                policy_kwargs=dict(net_arch=[64, 64]),
                learning_rate=5e-4,
                batch_size=32,
                gamma=0.99,
                verbose=1,
                tensorboard_log="logs/",
                )

    model.learn(int(2e4))
    model.save("models/ppo1")

Training for 20,000 steps takes roughly 2 hours; the training curves are shown below:
Figure: mean episode length curve
Figure: mean episode return curve
As the figures show, the mean episode length gradually decreases and the mean episode return gradually increases as training progresses. (The policy has not reached the optimum; interested readers can train it further.)

Model testing:

if __name__ == '__main__':
    args = get_args()
    env = SumoGym(args)

    ACTIONS_ALL = {
        0: 'Lane keeping',
        1: 'Lane change',
    }

    # load model
    model = PPO.load("models/ppo1", env=env)

    eposides = 10

    for eq in range(eposides):
        print("Test eposide: {}".format(eq))
        obs = env.reset()
        done = False
        rewards = 0
        counts = 0
        while not done:
            counts += 1
            time.sleep(0.01)
            action, _state = model.predict(obs, deterministic=True)
            action = action.item()
            obs, reward, done, info = env.step(action)
            # env.render()
            rewards += reward
        print("The rewards is: {},  the counts is: {}".format(rewards, counts))

Test results:

Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
Test eposide: 0
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 1
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 2
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 3
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 1ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 4
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 5
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 6
Loading configuration ... done.
Mission success!
Step #21.00 (1ms ~= 1000.00*RT, ~2000.00UPS, TraCI: 1ms, vehicles TOT 2 ACT 2 BUF 0)      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 7
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 8
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11
Test eposide: 9
Loading configuration ... done.
Mission success!
Step #21.00 (0ms ?*RT. ?UPS, TraCI: 2ms, vehicles TOT 2 ACT 2 BUF 0)                      
The rewards is: -26.506993450624154,  the counts is: 11

As can be seen, the trained model completes the task in every episode, giving a success rate of 10/10 = 1.0.
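The same check can also be run with stable-baselines3's built-in evaluate_policy helper (a sketch, equivalent in spirit to the manual loop above):

from stable_baselines3.common.evaluation import evaluate_policy

# run 10 deterministic episodes and report the mean/std of the episode return
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print("mean reward: {:.2f} +/- {:.2f}".format(mean_reward, std_reward))
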
Test video:

sumo_test1

4. Postscript

The above is a small example of lane-change decision making based on SUMO. The focus of this article is the SUMO + gym + SB3 technical route for reinforcement-learning-based lane-change decisions; the designs of the action space, state space and reward function are fairly simple and leave considerable room for optimization, which interested readers can pursue further.
All code for this article is available on Gitee: sumo-gym-lane-change
If you have any questions, see you in the comments.
