I recently picked up a ROS2 research task that requires digging into the ROS2 source code, so I'm keeping notes here along the way.
The setup is the old familiar combo: VMware Player + Ubuntu 20.04.
The installed version is Foxy Fitzroy, built from source, following the official guide directly: https://index.ros.org/doc/ros2/Installation/Foxy/Linux-Development-Setup/
The only thing to watch out for is that when running
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
this step may fail with a connection error. The fix is to add the IP address of raw.githubusercontent.com to /etc/hosts; the address can be looked up here: https://githubusercontent.com.ipaddress.com/raw.githubusercontent.com
Since I already have a ROS1 background, let's go straight to ~/ros2_foxy/src/ros2/examples and look at the core C++ examples.
examples/rclcpp/
├── actions
├── composition
├── executors
├── services
├── timers
└── topics
The examples fall into the categories above: the very familiar topics, services, actions and timers, plus two unfamiliar-looking ones, executors and composition. Let's start with the familiar parts.
First, open two files: topics/minimal_publisher/lambda.cpp and topics/minimal_subscriber/lambda.cpp, which together form a publisher/subscriber pair.
Let's look at the publisher first. The moment I opened minimal_publisher/lambda.cpp: wow. It is full of C++11 style and genuinely pleasant to read!
#include <chrono>
#include <memory>
#include <string>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using namespace std::chrono_literals;

class MinimalPublisher : public rclcpp::Node
{
public:
  MinimalPublisher()
  : Node("minimal_publisher"), count_(0)
  {
    publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);
    auto timer_callback =
      [this]() -> void {
        auto message = std_msgs::msg::String();
        message.data = "Hello, world! " + std::to_string(this->count_++);
        RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
        this->publisher_->publish(message);
      };
    timer_ = this->create_wall_timer(500ms, timer_callback);
  }

private:
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  size_t count_;
};

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalPublisher>());
  rclcpp::shutdown();
  return 0;
}
The main function starts with init, which looks similar to ros::init() in ROS1, except that the namespace is now rclcpp and the node-name argument is gone. We'll do some analysis of rclcpp later.
rclcpp::init(argc, argv);
//ROS1
ros::init(argc, argv, "talker");
Next we come straight to the familiar spin function:
rclcpp::spin(std::make_shared<MinimalPublisher>());
//ROS1
ros::spin();
spin takes a shared pointer to an rclcpp::Node, so the interface now seems to operate on one specific node; presumably a single main function can now spin multiple nodes, for example via an executor, as sketched below.
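As a quick hedged sketch (not one of the example files), this is roughly what spinning two nodes in one process could look like, using the same executor that spin() uses internally; MinimalSubscriber is the class we will meet later in this post:

#include <memory>
#include "rclcpp/rclcpp.hpp"

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  // Both nodes live in the same process and share one executor thread.
  auto talker = std::make_shared<MinimalPublisher>();
  auto listener = std::make_shared<MinimalSubscriber>();
  rclcpp::executors::SingleThreadedExecutor exec;
  exec.add_node(talker);
  exec.add_node(listener);
  exec.spin();  // services callbacks of both nodes until shutdown
  rclcpp::shutdown();
  return 0;
}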
Let's look at the spin code:
void rclcpp::spin(rclcpp::node_interfaces::NodeBaseInterface::SharedPtr node_ptr)
{
  rclcpp::executors::SingleThreadedExecutor exec;
  exec.add_node(node_ptr);
  exec.spin();
  exec.remove_node(node_ptr);
}

void rclcpp::spin(rclcpp::Node::SharedPtr node_ptr)
{
  rclcpp::spin(node_ptr->get_node_base_interface());
}
We can see that an executor is used here: nodes (potentially several of them) are added via add_node, and then its spin() interface is called.
void SingleThreadedExecutor::spin()
{
  if (spinning.exchange(true))
  {
    throw std::runtime_error("spin() called while already spinning");
  }
  RCLCPP_SCOPE_EXIT(this->spinning.store(false););
  while (rclcpp::ok(this->context_) && spinning.load())
  {
    rclcpp::AnyExecutable any_executable;
    if (get_next_executable(any_executable))
    {
      execute_any_executable(any_executable);
    }
  }
}
Looking further into SingleThreadedExecutor's spin implementation: the while loop keeps checking ok(), which should match ros::ok() in ROS1, except that it takes the context of a specific node. Note that spinning is a C++11 atomic_bool that represents the current spin state.
If get_next_executable finds an AnyExecutable that is ready to run, execute_any_executable is called to run that one AnyExecutable (this is the single-threaded executor, after all). The implementation of execute_any_executable is as follows:
void Executor::execute_any_executable(AnyExecutable & any_exec)
{
  if (!spinning.load())
  {
    return;
  }
  if (any_exec.timer)
  {
    execute_timer(any_exec.timer);
  }
  if (any_exec.subscription)
  {
    execute_subscription(any_exec.subscription);
  }
  if (any_exec.service)
  {
    execute_service(any_exec.service);
  }
  if (any_exec.client)
  {
    execute_client(any_exec.client);
  }
  if (any_exec.waitable)
  {
    any_exec.waitable->execute();
  }
  // Reset the callback_group, regardless of type
  any_exec.callback_group->can_be_taken_from().store(true);
  // Wake the wait, because it may need to be recalculated or work that
  // was previously blocked is now available.
  rcl_ret_t ret = rcl_trigger_guard_condition(&interrupt_guard_condition_);
  if (ret != RCL_RET_OK)
  {
    throw_from_rcl_error(ret, "Failed to trigger guard condition from execute_any_executable");
  }
}
It checks each member pointer of AnyExecutable in turn and executes whichever one is set. Here is the definition of AnyExecutable again; it is fairly self-explanatory.
struct AnyExecutable
{
  RCLCPP_PUBLIC
  AnyExecutable();

  RCLCPP_PUBLIC
  virtual ~AnyExecutable();

  // Only one of the following pointers will be set.
  rclcpp::SubscriptionBase::SharedPtr subscription;
  rclcpp::TimerBase::SharedPtr timer;
  rclcpp::ServiceBase::SharedPtr service;
  rclcpp::ClientBase::SharedPtr client;
  rclcpp::Waitable::SharedPtr waitable;

  // These are used to keep the scope on the containing items
  rclcpp::CallbackGroup::SharedPtr callback_group;
  rclcpp::node_interfaces::NodeBaseInterface::SharedPtr node_base;
};
OK, back to rclcpp::spin in main. Let's look at the MinimalPublisher object that spin receives.
class MinimalPublisher : public rclcpp::Node
{
public:
  MinimalPublisher()
  : Node("minimal_publisher"), count_(0)
  {
    publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);
    auto timer_callback =
      [this]() -> void {
        auto message = std_msgs::msg::String();
        message.data = "Hello, world! " + std::to_string(this->count_++);
        RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
        this->publisher_->publish(message);
      };
    timer_ = this->create_wall_timer(500ms, timer_callback);
  }

private:
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  size_t count_;
};
MinimalPublisher must inherit from rclcpp::Node so that the executor can spin it. MinimalPublisher declares an rclcpp::Publisher<std_msgs::msg::String>::SharedPtr member, which is created in the constructor via create_publisher. The declaration of create_publisher is:
template<
  typename MessageT,
  typename AllocatorT = std::allocator<void>,
  typename PublisherT = rclcpp::Publisher<MessageT, AllocatorT>>
std::shared_ptr<PublisherT>
create_publisher(
  const std::string & topic_name,
  const rclcpp::QoS & qos,
  const PublisherOptionsWithAllocator<AllocatorT> & options =
    PublisherOptionsWithAllocator<AllocatorT>());
Two arguments are passed here: the first, "topic", is the topic name to publish on; the second is the QoS parameter. Recall how a publisher is constructed in ROS1:
ros::Publisher chatter_pub = n.advertise<std_msgs::String>("chatter", 1000);
In ROS1 the 1000 is the length of the outgoing message queue; the faster you publish, the larger the queue should be. In ROS2 the second numeric argument is used to construct an rclcpp::QoS, whose constructor is:
QoS::QoS(size_t history_depth)
: QoS(KeepLast(history_depth))
{
}
Passing a plain number therefore defaults to the KeepLast history policy (RMW_QOS_POLICY_HISTORY_KEEP_LAST), which keeps at most the newest history_depth samples. All of the QoS history policies are:
/// QoS history enumerations describing how samples endure
enum RMW_PUBLIC_TYPE rmw_qos_history_policy_t
{
  /// Implementation default for history policy
  RMW_QOS_POLICY_HISTORY_SYSTEM_DEFAULT,
  /// Only store up to a maximum number of samples, dropping oldest once max is exceeded
  RMW_QOS_POLICY_HISTORY_KEEP_LAST,
  /// Store all samples, subject to resource limits
  RMW_QOS_POLICY_HISTORY_KEEP_ALL,
  /// History policy has not yet been set
  RMW_QOS_POLICY_HISTORY_UNKNOWN
};
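The bare depth is just a shorthand. As a hedged sketch (not part of the example code), a QoS profile can also be built explicitly and handed to create_publisher; the chained setters below are part of rclcpp::QoS:

// Explicit QoS profile instead of the bare depth "10":
auto qos = rclcpp::QoS(rclcpp::KeepLast(10))  // history: keep the newest 10 samples
  .reliable()          // reliability: RELIABLE, lost samples are retransmitted
  .transient_local();  // durability: late-joining subscribers still receive old samples
publisher_ = this->create_publisher<std_msgs::msg::String>("topic", qos);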
create_publisher uses a function member of an rclcpp::PublisherFactory to construct the PublisherBase:
rclcpp::PublisherBase::SharedPtr
NodeTopics::create_publisher(
  const std::string & topic_name,
  const rclcpp::PublisherFactory & publisher_factory,
  const rclcpp::QoS & qos)
{
  // Create the MessageT specific Publisher using the factory, but return it as PublisherBase.
  return publisher_factory.create_typed_publisher(node_base_, topic_name, qos);
}
The create_typed_publisher member is a function object built from a lambda; that lambda is:
[options](
  rclcpp::node_interfaces::NodeBaseInterface * node_base,
  const std::string & topic_name,
  const rclcpp::QoS & qos
) -> std::shared_ptr<PublisherT>
{
  auto publisher = std::make_shared<PublisherT>(node_base, topic_name, qos, options);
  // This is used for setting up things like intra process comms which
  // require this->shared_from_this() which cannot be called from
  // the constructor.
  publisher->post_init_setup(node_base, topic_name, qos, options);
  return publisher;
}
Here make_shared creates an object of the PublisherT template type, i.e. the concrete Publisher<MessageT>, which is then returned to the caller as a PublisherBase::SharedPtr.
Let's take a quick look at the Publisher constructor:
PublisherBase::PublisherBase(
  rclcpp::node_interfaces::NodeBaseInterface * node_base,
  const std::string & topic,
  const rosidl_message_type_support_t & type_support,
  const rcl_publisher_options_t & publisher_options)
: rcl_node_handle_(node_base->get_shared_rcl_node_handle()),
  intra_process_is_enabled_(false), intra_process_publisher_id_(0)
{
  ...
  publisher_handle_ = std::shared_ptr<rcl_publisher_t>(new rcl_publisher_t, custom_deleter);
  *publisher_handle_.get() = rcl_get_zero_initialized_publisher();
  // Initialize the rcl_publisher_t handle
  rcl_ret_t ret = rcl_publisher_init(
    publisher_handle_.get(),
    rcl_node_handle_.get(),
    &type_support,
    topic.c_str(),
    &publisher_options);
  ...
  // Create the RMW-layer rmw_publisher_t handle
  rmw_publisher_t * publisher_rmw_handle = rcl_publisher_get_rmw_handle(publisher_handle_.get());
  ...
}
To summarize the create_publisher flow:
1. create_publisher(topic_name, options, use_intra_process)
2. Publisher(topic_name, options) // Publisher constructor
3. rcl_publisher_init(out rcl_publisher_t, rcl_node_t, topic_name, options) // initialize our rcl_publisher_t handle
4. rmw_create_publisher(rmw_node_t, topic_name, qos_options) // create the rmw_publisher_t handle
To be precise: sending messages through the ROS2 middleware (RMW) requires creating the rmw_* handles above, but going through the middleware itself is not mandatory. Nodes inside the same process, for example, do not need the middleware at all and can communicate in-process; that is what the intra-process code that keeps showing up is about. We'll analyze intra-process later; for now, just remember that middleware communication means first an rcl handle, then an rmw handle.
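For reference, here is a hedged sketch of how intra-process communication is switched on (use_intra_process_comms() is an rclcpp::NodeOptions setter; the node name below is just illustrative):

// Nodes created with this option can exchange messages in-process, skipping the RMW layer.
auto options = rclcpp::NodeOptions().use_intra_process_comms(true);
auto node = rclcpp::Node::make_shared("intra_process_talker", options);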
Good. At this point we basically know what create_publisher does, so let's go back to our own MinimalPublisher class and continue.
auto timer_callback =
  [this]() -> void {
    auto message = std_msgs::msg::String();
    message.data = "Hello, world! " + std::to_string(this->count_++);
    RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
    this->publisher_->publish(message);
  };
Here a lambda expression is used to define the callback (embracing C++11) that fires when the timer expires. Inside it, a familiar std_msgs object is declared and assigned in the usual way, and the logging macro and log levels are essentially the same as in ROS1.
Internally, publish ends up calling rmw_publish_serialized_message, i.e. it uses the transport interface of the middleware layer:
rmw_ret_t ret = rmw_publish_serialized_message(
publisher->impl->rmw_handle, serialized_message, allocation);
The DDS middleware layer deserves its own article, so we'll leave that analysis for later.
Good, that wraps up the publisher side.
Things are going smoothly so far, so let's switch over to the subscriber.
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

class MinimalSubscriber : public rclcpp::Node
{
public:
  MinimalSubscriber()
  : Node("minimal_subscriber")
  {
    subscription_ = this->create_subscription<std_msgs::msg::String>(
      "topic",
      10,
      [this](std_msgs::msg::String::UniquePtr msg) {
        RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg->data.c_str());
      });
  }

private:
  rclcpp::Subscription<std_msgs::msg::String>::SharedPtr subscription_;
};

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalSubscriber>());
  rclcpp::shutdown();
  return 0;
}
The main function is just as concise; the only difference from the publisher is that the object handed to spin is now a MinimalSubscriber. Let's look directly at the MinimalSubscriber class:
class MinimalSubscriber : public rclcpp::Node
{
public:
  MinimalSubscriber()
  : Node("minimal_subscriber")
  {
    subscription_ = this->create_subscription<std_msgs::msg::String>(
      "topic",
      10,
      [this](std_msgs::msg::String::UniquePtr msg) {
        RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg->data.c_str());
      });
  }

private:
  rclcpp::Subscription<std_msgs::msg::String>::SharedPtr subscription_;
};
MinimalSubscriber also inherits from rclcpp::Node and holds an rclcpp::Subscription<std_msgs::msg::String>::SharedPtr member, which is created in the constructor via create_subscription:
subscription_ = this->create_subscription<std_msgs::msg::String>(
  "topic",
  10,
  [this](std_msgs::msg::String::UniquePtr msg) {
    RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg->data.c_str());
  });
create_subscription is templated on the message type and takes three arguments here: the first is the topic name; the second is the subscription's QoS configuration, same as on the publisher side; the third is the subscription callback, again written as a lambda.
The declaration of create_subscription is:
template<
  typename MessageT,
  typename CallbackT,
  typename AllocatorT,
  typename CallbackMessageT,
  typename SubscriptionT,
  typename MessageMemoryStrategyT>
std::shared_ptr<SubscriptionT>
create_subscription(
  const std::string & topic_name,
  const rclcpp::QoS & qos,
  CallbackT && callback,
  const SubscriptionOptionsWithAllocator<AllocatorT> & options =
    SubscriptionOptionsWithAllocator<AllocatorT>(),
  typename MessageMemoryStrategyT::SharedPtr msg_mem_strat =
    (MessageMemoryStrategyT::create_default()));
Compare this with the subscribe interface in ROS1:
virtual void subscribe(ros::NodeHandle &nh,
const std::string &topic, uint32_t queue_size,
const ros::TransportHints &transport_hints = ros::TransportHints(),
ros::CallbackQueueInterface *callback_queue = 0) = 0;
We can see that ROS2 adds a message memory strategy: the user can supply an allocator suited to a particular message type, for example large images, to speed up execution. It has a real STL flavor to it... A sketch of where that parameter goes is below.
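Just to make the extra parameter concrete, here is a hedged sketch (not from the examples) that passes the default message memory strategy explicitly, following the declaration above; a custom strategy would be dropped in at the same spot:

// Passing the (default) message memory strategy explicitly; a custom one would go here instead.
using StrategyT =
  rclcpp::message_memory_strategy::MessageMemoryStrategy<std_msgs::msg::String>;
auto msg_strategy = StrategyT::create_default();
subscription_ = this->create_subscription<std_msgs::msg::String>(
  "topic", 10,
  [this](std_msgs::msg::String::UniquePtr msg) {
    RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg->data.c_str());
  },
  rclcpp::SubscriptionOptions(),  // default options
  msg_strategy);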
The implementation of create_subscription is fairly involved, so let's walk through it step by step.
First, some preparation: get the node's topic-related wrapper object:
auto node_topics = get_node_topics_interface(std::forward<NodeT>(node));
Next comes topic statistics collection and the publishing of those statistics:
if (rclcpp::detail::resolve_enable_topic_statistics(options, *node_topics->get_node_base_interface()))
{
  ...
  // Publish the statistics; we won't pay much attention to this part for now
  std::shared_ptr<Publisher<statistics_msgs::msg::MetricsMessage>> publisher =
    create_publisher<statistics_msgs::msg::MetricsMessage>(
      node,
      options.topic_stats_options.publish_topic,
      qos);
  ...
};
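As a side note, and only a hedged sketch (field names as they appear in Foxy's SubscriptionOptions, not part of the example code), this is roughly how a user switches those statistics on for a subscription:

// Enable topic statistics for one subscription via its options.
rclcpp::SubscriptionOptions stats_options;
stats_options.topic_stats_options.state = rclcpp::TopicStatisticsState::Enable;
stats_options.topic_stats_options.publish_topic = "/statistics";  // illustrative topic name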
Then a SubscriptionFactory helper object is created, which is used to build the subscription:
auto factory = rclcpp::create_subscription_factory<MessageT>(
  std::forward<CallbackT>(callback),
  options,
  msg_mem_strat,
  subscription_topic_stats);
SubscriptionFactory is defined as follows; it is essentially just a wrapper around a create_typed_subscription function object:
struct SubscriptionFactory
{
  // Creates a Subscription object and returns it as a SubscriptionBase.
  using SubscriptionFactoryFunction = std::function<rclcpp::SubscriptionBase::SharedPtr(
    rclcpp::node_interfaces::NodeBaseInterface * node_base,
    const std::string & topic_name,
    const rclcpp::QoS & qos)>;

  const SubscriptionFactoryFunction create_typed_subscription;
};
With the SubscriptionFactory in hand, we run:
auto sub = node_topics->create_subscription(topic_name, factory, qos);
which yields a SubscriptionBase base-class pointer. Inside, create_subscription simply invokes the function object held by the SubscriptionFactory:
rclcpp::SubscriptionBase::SharedPtr
NodeTopics::create_subscription(
  const std::string & topic_name,
  const rclcpp::SubscriptionFactory & subscription_factory,
  const rclcpp::QoS & qos)
{
  // Create the MessageT specific Subscription using the factory, but return a SubscriptionBase.
  return subscription_factory.create_typed_subscription(node_base_, topic_name, qos);
}
The Subscription constructor registers the handles used to listen for message data:
try
{
  this->add_event_handler(
    [this](QOSRequestedIncompatibleQoSInfo & info) {
      this->default_incompatible_qos_callback(info);
    },
    RCL_SUBSCRIPTION_REQUESTED_INCOMPATIBLE_QOS);
}
add_event_handler first initializes the rcl_subscription_t handle and then calls rcl_subscription_event_init to initialize the rmw_subscription_t handle. That rmw handle is the bridge between ROS2 and the communication middleware; later it is used to check whether there is data waiting to be processed.
So the creation of a Subscription can be summarized as:
1. create_subscription(topic_name, callback, options, use_intra_process)
2. Subscription(topic_name, callback, options) // constructor
3. rcl_subscription_init(out rcl_subscription_t, rcl_node_t, topic_name, options) // create the rcl handle
4. rmw_create_subscription(rmw_node_t, topic_name, qos_options) : rmw_subscription_t // create the rmw handle
Finally, the subscription relationship is established for the newly created Subscription:
node_topics->add_subscription(sub, options.callback_group);
Note that in ROS2 every node owns a number of callback groups:
rclcpp::CallbackGroup::SharedPtr default_callback_group_;  // the default callback group
std::vector<rclcpp::CallbackGroup::WeakPtr> callback_groups_;
add_subscription adds the SubscriptionBase to a specific callback group:
callback_group->add_subscription(subscription);
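To make the callback-group idea concrete, here is a hedged sketch (not from the examples; names are illustrative) of putting a subscription into its own group instead of the node's default one:

// Create a dedicated callback group and hand it to the subscription via its options.
auto group = this->create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);
rclcpp::SubscriptionOptions sub_options;
sub_options.callback_group = group;
subscription_ = this->create_subscription<std_msgs::msg::String>(
  "topic", 10,
  [this](std_msgs::msg::String::UniquePtr msg) {
    RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg->data.c_str());
  },
  sub_options);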
Eventually, the executor calls the Subscription's handle_message method, which invokes the user-defined callback:
void Executor::execute_subscription(rclcpp::SubscriptionBase::SharedPtr subscription)
{
  rclcpp::MessageInfo message_info;
  message_info.get_rmw_message_info().from_intra_process = false;

  if (subscription->is_serialized())
  {
    // This is the case where a copy of the serialized message is taken from
    // the middleware via inter-process communication.
    std::shared_ptr<SerializedMessage> serialized_msg = subscription->create_serialized_message();
    take_and_do_error_handling(
      "taking a serialized message from topic",
      subscription->get_topic_name(),
      [&]() {
        return subscription->take_serialized(*serialized_msg.get(), message_info);
      },
      [&]() {
        auto void_serialized_msg = std::static_pointer_cast<void>(serialized_msg);
        subscription->handle_message(void_serialized_msg, message_info);  // the data is handled here
      });
    subscription->return_serialized_message(serialized_msg);
  }
Starting from the publisher and subscriber code in the ROS2 examples, this post has briefly analyzed how publishing and subscribing are created internally, including how the rcl and rmw handles come into being. In the next post I'll look at how the executor works and how subscription callbacks are processed in detail.