[ML] A complete beginner's first hands-on PyTorch project, part 1: estimating yawrate from rosbag data with a neural network

Update info: 12 13

Summary of the current inference-model results, plus fixes for two big problems: swap BatchNorm for LayerNorm; the original feature selection was inappropriate.
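
A minimal sketch of what the BatchNorm → LayerNorm swap looks like in the MLP defined later in the post (where exactly a norm layer sat in that revision is not shown, so the placement here is my assumption):

self.linear_relu_stack = nn.Sequential(
    nn.Linear(9, 64),
    nn.LayerNorm(64),   # was nn.BatchNorm1d(64); LayerNorm normalizes per sample, so it does not depend on batch statistics
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.LayerNorm(64),
    nn.ReLU(),
    nn.Linear(64, 7)
)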

Update info: 09 28

  • Fixed a data-reading bug
  • Added an inference model

Update info: 10 20

  • The test set now uses a separate recording from the same vehicle

Update info: 10 23

  • Added learning-rate decay
  • Added weight-initialization code
  • Retrained the model; in single-sample tests, scaling the relevant features up and down changes the result in the expected direction. Accuracy has reached a reasonable level but can still be improved.

Update info: 10 27

  • Previously there was no GPU, so everything ran on the default device; the following has now been added:
    X = X.to(device)  # move the batch to the device as well
    y = y.to(device)
    # create LR schedule
    scheduler = StepLR(optimizer, step_size=100, gamma=0.9)  # decay the learning rate by gamma (0.9) every step_size (100) epochs

The lines above give a dynamic (decaying) learning rate.
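
A quick standalone way to sanity-check the decay schedule (a hypothetical snippet; the dummy parameter exists only so SGD has something to hold, and the expected values follow lr = 1e-3 · 0.9^(epoch // 100)):

import torch
from torch.optim.lr_scheduler import StepLR

param = torch.nn.Parameter(torch.zeros(1))      # dummy parameter for the optimizer
optimizer = torch.optim.SGD([param], lr=1e-3)
scheduler = StepLR(optimizer, step_size=100, gamma=0.9)
for epoch in range(1, 301):
    optimizer.step()                            # in real training the forward/backward pass goes here
    scheduler.step()
    if epoch % 100 == 0:
        print(epoch, scheduler.get_last_lr())   # [0.0009], [0.00081], [0.000729]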

    def initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)

Explanation:
The line for m in self.modules(): iterates over all modules (layers) in the model.

if isinstance(m, nn.Linear): checks whether the current module is a linear layer.

nn.init.xavier_uniform_(m.weight) performs Xavier initialization: the weights are drawn from a particular distribution (here a uniform one) so that the output variance of each layer stays roughly constant, which helps the network train more stably.

if m.bias is not None: checks whether the module has a bias term; if it does, the bias is initialized to zero.
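
Note that in the full scripts below initialize_weights is defined but never invoked; to actually apply it, call it once right after constructing the model (a minimal sketch):

model = NeuralNetwork().to(device)
model.initialize_weights()   # applies Xavier init to every nn.Linear layer before training starts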

Background

This is used as one piece of a closed-loop simulation system. Estimating the vehicle yawrate from the steering-wheel angle with a kinematic model carries a certain amount of error: nonlinear steering ratio, actuator response delay, vehicle dimensions, errors in the kinematic model itself, vehicle inertia, and so on.
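
For context, the kinematic baseline being replaced is roughly the bicycle-model relation yawrate ≈ v · tan(δ_front) / L, where δ_front is the front-wheel angle (steering-wheel angle divided by the steering ratio) and L is the wheelbase. A minimal sketch (the ratio and wheelbase values are placeholders, not this vehicle's real parameters):

import math

STEERING_RATIO = 16.0   # placeholder: steering-wheel angle / front-wheel angle
WHEELBASE_M = 2.9       # placeholder wheelbase in meters

def kinematic_yawrate(steer_wheel_angle_rad, speed_mps):
    # bicycle-model estimate; this is the error-prone baseline the network is meant to replace
    front_wheel_angle = steer_wheel_angle_rad / STEERING_RATIO
    return speed_mps * math.tan(front_wheel_angle) / WHEELBASE_M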

Steps

  1. Record a rosbag on the real vehicle
  2. Select features
  3. Replay the rosbag, print the feature values, and align the message frequencies
  4. Build a simple neural network in PyTorch (since the vehicle state has a temporal structure, switching to an RNN or a Transformer is worth trying later)
    • Build a custom dataset from the printed data; the features chosen here are steering-wheel angle, steering-wheel angle rate, yawrate, steering-wheel rotation speed, lateral acceleration, longitudinal acceleration, and vehicle speed
    • The next cycle's steering-wheel angle, yawrate, steering-wheel rotation speed, lateral acceleration, longitudinal acceleration, vehicle speed, and steering-wheel angle rate are used as the regression targets
  5. Save the model

Related code

Extracting the bag data

Create a ROS node that prints the data I need in an aligned order and redirect the output into a txt file. The messages come from my own compiled ROS msg definitions, so readers will have to produce the data themselves; some real test-drive data is also pasted at the very end. To fabricate data, you can relate quantities such as steering-wheel angle and yawrate with a polynomial, for example driving one of them with a sine function as the input and deriving the others from it (see the sketch after the ROS node code below). Split the result into training data and test data.

import rospy
#adding libs according to your needs

from noa_msgs.msg import  Trajectory_to_Control,ESAInfo,NOHCtrlInfo


global timeLock
timeLock=1   

# passing by the data with ros callback method 
# def callback(msg):
#     global timeLock
#     print(f"msg.PlanningStatus: {msg.PlanningStatus}")

def callback4(msg):
    global timeLock
    if timeLock==1:
        print(f"msg.egoEgoStatus.yawRate: {msg.egoEgoStatus.yawRate}")
        print(f"msg.egoEgoStatus.linearSpeed: {msg.egoEgoStatus.linearSpeed}")
        print(f"msg.egoEgoStatus.accerationX: {msg.egoEgoStatus.accerationX}")
        print(f"msg.egoEgoStatus.accerationY: {msg.egoEgoStatus.accerationY}")
        print(f"msg.egoEgoStatus.steerWheelAngle: {msg.egoEgoStatus.steerWheelAngle}")
        print(f"msg.egoEgoStatus.steerWheelAngleRate: {msg.egoEgoStatus.steerWheelAngleRate}")
        print(f"msg.egoEgoStatus.frontWheelAngle: {msg.egoEgoStatus.frontWheelAngle}")
        timeLock=2
 

def callback7(msg):
    global timeLock
    if timeLock==2:
        print(f"msg.nohCtrlOutput.targetStrAngle: {msg.nohCtrlOutput.targetStrAngle}")
        print(f"msg.nohCtrlOutput.targetAcceleration: {msg.nohCtrlOutput.targetAcceleration}")
        timeLock=1

def start_ros():
    rospy.init_node('printPY')
    #rospy.Subscriber('/planning/Trajectory_toCtrl', Trajectory_to_Control, callback,queue_size=10)
    #rospy.Subscriber('/udp2ros/fusion_lanelines', FusionLaneMarker, callback2,queue_size=10)
    #rospy.Subscriber('/udp2ros/PredictionInfo', PredictionObstacles, callback3,queue_size=10)
    rospy.Subscriber('/udp2ros/ESAInfo', ESAInfo, callback4,queue_size=10)
    #rospy.Subscriber('/ppcontroller/PPOUT',PPOUT,callback5,queue_size=10)
    #rospy.Subscriber('replaytrace/NewTrace',NewTrace,callback6,queue_size=10)
    rospy.Subscriber('/udp2ros/NOHCtrlInfo',NOHCtrlInfo,callback7,queue_size=10)
    rospy.spin()
 

if __name__ == "__main__":

    start_ros()
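
If you have no vehicle or bag to replay, here is a minimal sketch of the data-fabrication idea mentioned above: drive the steering-wheel angle with a sine wave, tie yawrate to it through a toy polynomial, and write the result in the same key: value text format the node prints. All constants and relations below are invented purely for illustration and are not real vehicle dynamics.

import math
import random

def write_fake_data(path, n_cycles=2000, dt=0.05):
    # writes one block of 9 "key: value" lines per cycle, ending with targetAcceleration,
    # which is the key CustomDataset (defined below) uses to close each sample
    with open(path, 'w') as f:
        for i in range(n_cycles):
            t = i * dt
            steer = 0.3 * math.sin(0.2 * t)                    # sine steering-wheel input [rad]
            steer_rate = 0.3 * 0.2 * math.cos(0.2 * t)         # its time derivative
            speed = 20.0 + 0.5 * math.sin(0.05 * t)
            accel_x = 0.5 * 0.05 * math.cos(0.05 * t)          # derivative of speed
            yawrate = 0.06 * steer + 0.002 * steer ** 3        # toy polynomial steer -> yawrate relation
            yawrate += random.gauss(0.0, 1e-4)                 # a little measurement noise
            f.write(f"msg.egoEgoStatus.yawRate: {yawrate}\n")
            f.write(f"msg.egoEgoStatus.linearSpeed: {speed}\n")
            f.write(f"msg.egoEgoStatus.accerationX: {accel_x}\n")
            f.write(f"msg.egoEgoStatus.accerationY: {speed * yawrate}\n")
            f.write(f"msg.egoEgoStatus.steerWheelAngle: {steer}\n")
            f.write(f"msg.egoEgoStatus.steerWheelAngleRate: {steer_rate}\n")
            f.write(f"msg.egoEgoStatus.frontWheelAngle: {steer / 16.0}\n")
            f.write(f"msg.nohCtrlOutput.targetStrAngle: {steer}\n")
            f.write(f"msg.nohCtrlOutput.targetAcceleration: {accel_x}\n")

if __name__ == "__main__":
    write_fake_data('training.txt')
    write_fake_data('test.txt')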

Main code: build the dataset, define the network, run the training loop, print the results

The features are the data at time t0 and the label is (part of) the data at time t0+1; sample data is shown at the end of the post.

  • Customize the following three methods according to your data:
class CustomDataset(Dataset):
    def __init__(self, file_path):
    def __len__(self):
    def __getitem__(self, idx):
  • Define the neural network
class NeuralNetwork(nn.Module):

The full code uses Sequential; writing the layers out separately, as below, also works.

    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.linear1 = nn.Linear(9, 64)
        self.relu1 = nn.ReLU()
        self.linear2 = nn.Linear(64, 64)
        self.relu2 = nn.ReLU()
        self.linear3 = nn.Linear(64, 7)

    def forward(self, x):
        out = self.linear1(x)
        out = self.relu1(out)
        out = self.linear2(out)
        out = self.relu2(out)
        out = self.linear3(out)
        return out
  • Define the hyperparameters: the learning rate, the batch_size (how much data is fed in at once), and the number of training epochs:
learning_rate = 1e-3
batch_size = 16
epochs = 10
  • Use the mean-squared-error loss function and stochastic gradient descent
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
  • Define the training-loop and test-loop functions
def train_loop(dataloader, model, loss_fn, optimizer):
def test_loop(dataloader, model, loss_fn):

Full code

import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader


class CustomDataset(Dataset):
    def __init__(self, file_path):
        self.samples = []  

        with open(file_path, 'r') as file:
            lines = file.readlines()

        data = {}
        for line in lines:

            #items = line.strip().split('\n')
            key, value = line.split(': ')
            data[key] = float(value)
            if (key=="msg.nohCtrlOutput.targetAcceleration"):
                self.samples.append(data.copy())
                data.clear()

    def __len__(self):
        return len(self.samples) - 1  # -1 because __getitem__ also reads the next sample as the label

    def __getitem__(self, idx):
        sample = self.samples[idx]
        

        features = torch.tensor([
        sample.get('msg.egoEgoStatus.yawRate', 0.0),  # default value, in case a field was not printed correctly
        sample.get('msg.egoEgoStatus.linearSpeed', 0.0),
        sample.get('msg.egoEgoStatus.accerationX', 0.0),
        sample.get('msg.egoEgoStatus.accerationY', 0.0),
        sample.get('msg.egoEgoStatus.steerWheelAngle', 0.0),
        sample.get('msg.egoEgoStatus.steerWheelAngleRate', 0.0),
        sample.get('msg.egoEgoStatus.frontWheelAngle', 0.0),
        sample.get('msg.nohCtrlOutput.targetStrAngle', 0.0),
        sample.get('msg.nohCtrlOutput.targetAcceleration', 0.0)
        ])
        sample2 = self.samples[idx+1]  # +1: the next cycle's data serves as the label
        labels=torch.tensor([
        sample2.get('msg.egoEgoStatus.yawRate', 0.0),  
        sample2.get('msg.egoEgoStatus.linearSpeed', 0.0),
        sample2.get('msg.egoEgoStatus.accerationX', 0.0),
        sample2.get('msg.egoEgoStatus.accerationY', 0.0),
        sample2.get('msg.egoEgoStatus.steerWheelAngle', 0.0),
        sample2.get('msg.egoEgoStatus.steerWheelAngleRate', 0.0),
        sample2.get('msg.egoEgoStatus.frontWheelAngle', 0.0)
        ])

        return features,labels


# define the neural network
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        #self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(9, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 7)
        )

    def forward(self, x):
        #x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Set the model to training mode
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)
        # Backpropagation
        loss.backward()
        #refresh para
        optimizer.step()
        # clear grad
        optimizer.zero_grad()
        if batch % 50 == 0:  # print frequency
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    # Set the model to evaluation mode
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            #correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_loss /= num_batches
    print(f"Avg loss: {test_loss:>8f} \n")



# main process        
if __name__ == "__main__":
    # datasets
    data_file_l = 'dataCol2.txt' 
    data_file_t = 'dataCol.txt' 
    custom_dataset_training = CustomDataset(data_file_l)
    custom_dataset_valid = CustomDataset(data_file_t) 

    # para
    learning_rate = 1e-3
    batch_size = 16
    epochs = 350
    # data loaders
    data_loader_training = DataLoader(custom_dataset_training, batch_size=batch_size, shuffle=True,drop_last=True)
    data_loader_valid = DataLoader(custom_dataset_valid, batch_size=batch_size,shuffle=True, drop_last=True)
    # dataset check
    for idx, item in enumerate(data_loader_training):
        print('idx:', idx)
        data, label = item
        print('data:', data)
        print('label:', label)
    for data, label in data_loader_training:
        print(f"Shape of data: {data.shape}")
        print(f"Shape of label: {label.shape} {label.dtype}")
        break
    # pick an accelerator device if one is available
    device = (
        "cuda"
        if torch.cuda.is_available()
        else "mps"
        if torch.backends.mps.is_available()
        else "cpu"
    )
    model = NeuralNetwork().to(device)
    #model info debug
    # print(f"Using {device} device")
    # print(f"Model structure: {model}\n\n")
    # for name, param in model.named_parameters():
    #     print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")
    # create loss function && optimizer 
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    for i in range(epochs):
        print(f'_________________Epoch:{i+1}/{epochs}_______________________')
        train_loop(data_loader_training,model,loss_fn,optimizer)
        test_loop(data_loader_training,model,loss_fn)
    # saving model 
    torch.save(model.state_dict(), 'VehicalStateperML.pth')

Full code updated on Oct 23, with an RNN hook left in

import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
from rnnForYawrate import MyRNN
from torch.optim.lr_scheduler import StepLR # add LR Schedule  20 oct.



class CustomDataset(Dataset):
    def __init__(self, file_path):
        self.samples = []  

        with open(file_path, 'r') as file:
            lines = file.readlines()

        data = {}
        for line in lines:

            #items = line.strip().split('\n')
            key, value = line.split(': ')
            data[key] = float(value)
            if (key=="msg.nohCtrlOutput.targetAcceleration"):
                self.samples.append(data.copy())
                data.clear()

    def __len__(self):
        return len(self.samples) - 1  # -1 because __getitem__ also reads the next sample as the label

    def __getitem__(self, idx):
        sample = self.samples[idx]
        

        features = torch.tensor([
        sample.get('msg.egoEgoStatus.yawRate', 0.0),  # default value, in case a field was not printed correctly
        sample.get('msg.egoEgoStatus.linearSpeed', 0.0),
        sample.get('msg.egoEgoStatus.accerationX', 0.0),
        sample.get('msg.egoEgoStatus.accerationY', 0.0),
        sample.get('msg.egoEgoStatus.steerWheelAngle', 0.0),
        sample.get('msg.egoEgoStatus.steerWheelAngleRate', 0.0),
        sample.get('msg.egoEgoStatus.frontWheelAngle', 0.0),
        sample.get('msg.nohCtrlOutput.targetStrAngle', 0.0),
        sample.get('msg.nohCtrlOutput.targetAcceleration', 0.0)
        ])
        sample2 = self.samples[idx+1]  # +1: the next cycle's data serves as the label
        labels=torch.tensor([
        sample2.get('msg.egoEgoStatus.yawRate', 0.0),  
        sample2.get('msg.egoEgoStatus.linearSpeed', 0.0),
        sample2.get('msg.egoEgoStatus.accerationX', 0.0),
        sample2.get('msg.egoEgoStatus.accerationY', 0.0),
        sample2.get('msg.egoEgoStatus.steerWheelAngle', 0.0),
        sample2.get('msg.egoEgoStatus.steerWheelAngleRate', 0.0),
        sample2.get('msg.egoEgoStatus.frontWheelAngle', 0.0)
        ])

        return features,labels


# define the neural network
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        #self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(9, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 7)
        )
    def initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)

    def forward(self, x):
        #x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Set the model to training mode - important for batch normalization and dropout layers
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        X = X.to(device)  # move the batch to the device
        y = y.to(device)
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        #refresh para
        optimizer.step()
        # clear grad
        optimizer.zero_grad()

        if batch % 50 == 0:  # print frequency
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    # Set the model to evaluation mode - important for batch normalization and dropout layers
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0

    # Evaluating the model with torch.no_grad() ensures that no gradients are computed during test mode
    # also serves to reduce unnecessary gradient computations and memory usage for tensors with requires_grad=True
    with torch.no_grad():
        for X, y in dataloader:
            X = X.to(device)  # move the batch to the device
            y = y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            #correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_loss /= num_batches
    print(f"Avg loss: {test_loss:>8f} \n")



# main process        
if __name__ == "__main__":
    # datasets
    data_file_learning = 'training.txt' 
    data_file_t = 'test.txt' 
    custom_dataset_training = CustomDataset(data_file_learning)
    custom_dataset_valid = CustomDataset(data_file_t) # wait to use another data_file

    # para
    learning_rate = 1e-3
    batch_size = 16
    epochs = 5000
    # data loaders
    data_loader_training = DataLoader(custom_dataset_training, batch_size=batch_size, shuffle=True,drop_last=True)
    data_loader_valid = DataLoader(custom_dataset_valid, batch_size=batch_size,shuffle=True, drop_last=True)
    # dataset check
    for idx, item in enumerate(data_loader_training):
        print('idx:', idx)
        data, label = item
        print('data:', data)
        print('label:', label)
    for data, label in data_loader_training:
        print(f"Shape of data: {data.shape}")
        print(f"Shape of label: {label.shape} {label.dtype}")
        break
    # pick an accelerator device if one is available
    device = (
        "cuda"
        if torch.cuda.is_available()
        else "mps"
        if torch.backends.mps.is_available()
        else "cpu"
    )
    
    if 1:
        model = NeuralNetwork().to(device)
    else:
        model =MyRNN().to(device) 
    #model info debug
    # print(f"Using {device} device")
    # print(f"Model structure: {model}\n\n")
    # for name, param in model.named_parameters():
    #     print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")

    # create loss function && optimizer 
    loss_fn = nn.MSELoss()
    if 1:
        optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    else:
        optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    # create LR schedule
    scheduler = StepLR(optimizer, step_size=100, gamma=0.9)  # decay the learning rate by gamma (0.9) every step_size (100) epochs
    # running loop
    for i in range(epochs):
        print(f'_________________Epoch:{i+1}/{epochs}_______________________')
        train_loop(data_loader_training,model,loss_fn,optimizer)
        scheduler.step()
        test_loop(data_loader_training,model,loss_fn)
    # saving model 
    torch.save(model.state_dict(), 'VehicalStateperML.pth')
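
The rnnForYawrate module imported at the top of this script is not included in the post; as written, the script only runs through the if 1: branch that picks NeuralNetwork. For readers who want to exercise the RNN hook, a hypothetical minimal MyRNN compatible with the (batch, 9) → (batch, 7) shapes used here could look like this (my assumption, not the author's actual module):

# rnnForYawrate.py (hypothetical stand-in; treats each sample as a length-1 sequence)
import torch
from torch import nn

class MyRNN(nn.Module):
    def __init__(self, input_size=9, hidden_size=64, output_size=7):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, output_size)

    def forward(self, x):                  # x: (batch, 9)
        out, _ = self.rnn(x.unsqueeze(1))  # -> (batch, 1, hidden_size)
        return self.head(out[:, -1, :])    # -> (batch, 7)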

Results after training for 5000 epochs

loss: 0.000820  [30416/33964]
loss: 0.000359  [31216/33964]
loss: 0.000827  [32016/33964]
loss: 0.000901  [32816/33964]
loss: 0.000991  [33616/33964]
Avg loss: 0.001174 

_________________Epoch:4999/5000_______________________
loss: 0.000810  [   16/33964]
loss: 0.001203  [  816/33964]
loss: 0.000573  [ 1616/33964]
loss: 0.000904  [ 2416/33964]
loss: 0.000848  [ 3216/33964]
loss: 0.000672  [ 4016/33964]
loss: 0.000593  [ 4816/33964]
loss: 0.001202  [ 5616/33964]
loss: 0.001011  [ 6416/33964]
loss: 0.000489  [ 7216/33964]
loss: 0.001305  [ 8016/33964]
loss: 0.001012  [ 8816/33964]
loss: 0.000933  [ 9616/33964]
loss: 0.001469  [10416/33964]
loss: 0.000954  [11216/33964]
loss: 0.000512  [12016/33964]
loss: 0.000826  [12816/33964]
loss: 0.000957  [13616/33964]
loss: 0.000911  [14416/33964]
loss: 0.000608  [15216/33964]
loss: 0.001189  [16016/33964]
loss: 0.001215  [16816/33964]
loss: 0.001666  [17616/33964]
loss: 0.000423  [18416/33964]
loss: 0.001194  [19216/33964]
loss: 0.000625  [20016/33964]
loss: 0.000940  [20816/33964]
loss: 0.003449  [21616/33964]
loss: 0.001749  [22416/33964]
loss: 0.002202  [23216/33964]
loss: 0.002170  [24016/33964]
loss: 0.000825  [24816/33964]
loss: 0.000795  [25616/33964]
loss: 0.000846  [26416/33964]
loss: 0.001295  [27216/33964]
loss: 0.001125  [28016/33964]
loss: 0.001397  [28816/33964]
loss: 0.002657  [29616/33964]
loss: 0.000715  [30416/33964]
loss: 0.001595  [31216/33964]
loss: 0.001179  [32016/33964]
loss: 0.000918  [32816/33964]
loss: 0.003760  [33616/33964]
Avg loss: 0.001175 

_________________Epoch:5000/5000_______________________
loss: 0.000649  [   16/33964]
loss: 0.001178  [  816/33964]
loss: 0.000784  [ 1616/33964]
loss: 0.001764  [ 2416/33964]
loss: 0.001624  [ 3216/33964]
loss: 0.000725  [ 4016/33964]
loss: 0.001734  [ 4816/33964]
loss: 0.000964  [ 5616/33964]
loss: 0.000991  [ 6416/33964]
loss: 0.000627  [ 7216/33964]
loss: 0.001035  [ 8016/33964]
loss: 0.003209  [ 8816/33964]
loss: 0.000970  [ 9616/33964]
loss: 0.001103  [10416/33964]
loss: 0.001047  [11216/33964]
loss: 0.001675  [12016/33964]
loss: 0.000807  [12816/33964]
loss: 0.001046  [13616/33964]
loss: 0.000574  [14416/33964]
loss: 0.000845  [15216/33964]
loss: 0.000638  [16016/33964]
loss: 0.001065  [16816/33964]
loss: 0.001254  [17616/33964]
loss: 0.000480  [18416/33964]
loss: 0.001166  [19216/33964]
loss: 0.001324  [20016/33964]
loss: 0.001738  [20816/33964]
loss: 0.000408  [21616/33964]
loss: 0.001321  [22416/33964]
loss: 0.000931  [23216/33964]
loss: 0.001444  [24016/33964]
loss: 0.001653  [24816/33964]
loss: 0.001715  [25616/33964]
loss: 0.001097  [26416/33964]
loss: 0.001229  [27216/33964]
loss: 0.000743  [28016/33964]
loss: 0.000407  [28816/33964]
loss: 0.002980  [29616/33964]
loss: 0.000859  [30416/33964]
loss: 0.000906  [31216/33964]
loss: 0.001172  [32016/33964]
loss: 0.001529  [32816/33964]
loss: 0.000666  [33616/33964]
Avg loss: 0.001175 

Added an inference model; it runs inference on a separate set of data

Single-sample test result

The inference-model code is below.

import torch
from torch import nn
import trainingModel
from trainingModel import NeuralNetwork
from trainingModel import CustomDataset
from torch.utils.data import Dataset, DataLoader
# main process        
if __name__ == "__main__":


    data_file = 'validationData.txt'  
    custom_dataset_inference = CustomDataset(data_file)

    batch_size=16
    # data loader

    data_loader_inference = DataLoader(custom_dataset_inference, batch_size=batch_size,shuffle=True, drop_last=True)

    device = (
        "cuda"
        if torch.cuda.is_available()
        else "mps"
        if torch.backends.mps.is_available()
        else "cpu"
    )
    model_f=NeuralNetwork().to(device)
    model_f.load_state_dict(torch.load('VehicalStateperML.pth'))
    model_f.eval()
    # single data inference model
    if 0:
        data_test=[0.010496795875951648,21.383352279663086,0.36893051862716675,0.21288326382637024,-0.005235784687101841,
        0.0,-0.0003738777886610478,-0.8415042161941528,-0.35051336884498596]
        with torch.no_grad():
            input_data = torch.tensor(data_test)
            input_data = input_data.to(device)  
            output = model_f(input_data) 
        print(output) 
    else:
    # multi-sample inference over the whole dataset
        loss_fn = nn.MSELoss()
        trainingModel.test_loop(data_loader_inference, model_f, loss_fn)
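
The 7-element output follows the label order defined in CustomDataset.__getitem__, so the yawrate estimate this project is after is simply the first element. A small sketch of what could follow print(output) in the single-sample branch above:

# label order: [yawRate, linearSpeed, accerationX, accerationY,
#               steerWheelAngle, steerWheelAngleRate, frontWheelAngle]
yawrate_pred = output[0].item()
print(f"predicted yawrate for the next cycle: {yawrate_pred}")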

The training results above were obtained with this data.

A portion of my data:

msg.egoEgoStatus.yawRate: -0.0014590711798518896
msg.egoEgoStatus.linearSpeed: 21.168697357177734
msg.egoEgoStatus.accerationX: 0.46308547258377075
msg.egoEgoStatus.accerationY: 0.11678611487150192
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0956177711486816
msg.nohCtrlOutput.targetAcceleration: 0.08722390234470367
msg.egoEgoStatus.yawRate: -0.0006383389700204134
msg.egoEgoStatus.linearSpeed: 21.1744441986084
msg.egoEgoStatus.accerationX: 0.42762768268585205
msg.egoEgoStatus.accerationY: 0.2043771594762802
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0955262184143066
msg.nohCtrlOutput.targetAcceleration: 0.07341756671667099
msg.egoEgoStatus.yawRate: 0.00013275878154672682
msg.egoEgoStatus.linearSpeed: 21.178434371948242
msg.egoEgoStatus.accerationX: 0.401417076587677
msg.egoEgoStatus.accerationY: 0.1625726968050003
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.09548819065094
msg.nohCtrlOutput.targetAcceleration: 0.06061769649386406
msg.egoEgoStatus.yawRate: -0.0006731398170813918
msg.egoEgoStatus.linearSpeed: 21.181203842163086
msg.egoEgoStatus.accerationX: 0.5118013620376587
msg.egoEgoStatus.accerationY: 0.11503671109676361
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.095402717590332
msg.nohCtrlOutput.targetAcceleration: 0.04825887456536293
msg.egoEgoStatus.yawRate: -0.002039732877165079
msg.egoEgoStatus.linearSpeed: 21.192676544189453
msg.egoEgoStatus.accerationX: 0.48107850551605225
msg.egoEgoStatus.accerationY: 0.31369906663894653
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0952855348587036
msg.nohCtrlOutput.targetAcceleration: 0.035941433161497116
msg.egoEgoStatus.yawRate: -0.0011134763481095433
msg.egoEgoStatus.linearSpeed: 21.197023391723633
msg.egoEgoStatus.accerationX: 0.5030738115310669
msg.egoEgoStatus.accerationY: 0.24472637474536896
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0554426908493042
msg.nohCtrlOutput.targetAcceleration: 0.021234311163425446
msg.egoEgoStatus.yawRate: -0.0025999497156590223
msg.egoEgoStatus.linearSpeed: 21.210952758789062
msg.egoEgoStatus.accerationX: 0.5420584678649902
msg.egoEgoStatus.accerationY: 0.21159078180789948
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0553874969482422
msg.nohCtrlOutput.targetAcceleration: 0.007470085285604
msg.egoEgoStatus.yawRate: -0.002924883272498846
msg.egoEgoStatus.linearSpeed: 21.212251663208008
msg.egoEgoStatus.accerationX: 0.5491155385971069
msg.egoEgoStatus.accerationY: 0.1063433587551117
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0553544759750366
msg.nohCtrlOutput.targetAcceleration: -0.005256068427115679
msg.egoEgoStatus.yawRate: -0.0038749822415411472
msg.egoEgoStatus.linearSpeed: 21.22931480407715
msg.egoEgoStatus.accerationX: 0.42205414175987244
msg.egoEgoStatus.accerationY: 0.15122660994529724
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0548670291900635
msg.nohCtrlOutput.targetAcceleration: -0.019956741482019424
msg.egoEgoStatus.yawRate: -0.00416160374879837
msg.egoEgoStatus.linearSpeed: 21.240577697753906
msg.egoEgoStatus.accerationX: 0.41135478019714355
msg.egoEgoStatus.accerationY: 0.10188142210245132
msg.egoEgoStatus.steerWheelAngle: -0.005235784687101841
msg.egoEgoStatus.steerWheelAngleRate: 0.0
msg.egoEgoStatus.frontWheelAngle: -0.0003738777886610478
msg.nohCtrlOutput.targetStrAngle: -1.0547878742218018
msg.nohCtrlOutput.targetAcceleration: -0.033937543630599976
