Chromium communication system: ipcz (5) - ipcz implementation: channels and shared memory

In chromium communication system: ipcz (2) - ipcz implementation: same-Node communication, we analyzed communication within a single Node. Before analyzing cross-Node (cross-process) communication, and to keep that article to a reasonable length, this article serves as a bridge: we first analyze ipcz's communication channel and its shared memory mechanism.
[Figure 1: layering of RouterLink, NodeLink, and PlatformChannel]
In chromium communication system: ipcz (1) - basic concepts, we introduced NodeLink and RouterLink. Both are fairly high-level concepts: for two processes in an operating system to communicate, an IPC mechanism is needed. As the figure above shows, RouterLinks communicate via a NodeLink, and NodeLinks communicate via a PlatformChannel. A RouterLink essentially divides the PlatformChannel into multiple sub-channels, with the NodeLink managing them and dispatching messages. PlatformChannel is a wrapper over the operating system's IPC mechanism; on Linux it is implemented with Unix domain sockets. We also said the z in ipcz stands for zero-copy. How is zero-copy achieved? ipcz borrows another inter-process communication technique: shared memory. Shared memory is generally used for efficient inter-process data sharing, but it lacks control capabilities. For instance, how do you notify the receiver that a new message has arrived? The receiver can hardly poll a flag in shared memory; that would be far too inefficient. So ipcz uses a Unix domain socket to carry control data, and I/O events on the socket file descriptor trigger the reads and writes. This is the design paradigm of most IPC systems.

With that out of the way, let's analyze the implementation of the channel and shared memory.

The channel - PlatformChannel

We analyze cross-Node communication through the most complex scenario in the system: communication between a non-broker and another non-broker. We use a unit test as the example:
mojo/core/invitation_unittest.cc

 926 TEST_F(MAYBE_InvitationTest, NonBrokerToNonBroker) {
 927   // Tests a non-broker inviting another non-broker to join the network.
 928   MojoHandle host;
 929   base::Process host_process = LaunchChildTestClient(
 930       "NonBrokerToNonBrokerHost", &host, 1, MOJO_SEND_INVITATION_FLAG_NONE);
 931 
 932   // Send a pipe to the host, which it will forward to its launched client.
 933   MessagePipe pipe;
 934   MojoHandle client = pipe.handle0.release().value();
 935   MojoHandle pipe_for_client = pipe.handle1.release().value();
 936   WriteMessageWithHandles(host, "aaa", &pipe_for_client, 1);
 937 
 938   // If the host can successfully invite the client, the client will receive
 939   // this message and we'll eventually receive a message back from it.
 940   WriteMessage(client, "bbb");
 941   EXPECT_EQ("ccc", ReadMessage(client));
 942 
 943   // Signal to the host that it's OK to terminate, then wait for it ack.
 944   WriteMessage(host, "bye");
 945   WaitForProcessToTerminate(host_process);
 946   MojoClose(host);
 947   MojoClose(client);
 948 }
 949

Line 929 launches a new process; the current process then communicates with it. Communication requires a channel, so analyzing the LaunchChildTestClient function shows us how the channel is established. For clarity, we call the current process "process a" and the launched child "process b". Process a is the broker process.
mojo/core/invitation_unittest.cc

317 // static
318 base::Process MAYBE_InvitationTest::LaunchChildTestClient(
319     const std::string& test_client_name,
320     MojoHandle* primordial_pipes,
321     size_t num_primordial_pipes,
322     MojoSendInvitationFlags send_flags,
323     MojoProcessErrorHandler error_handler,
324     uintptr_t error_handler_context,
325     base::CommandLine* custom_command_line,
326     base::LaunchOptions* custom_launch_options) {
        ......
343 
344   PlatformChannel channel;
345   PlatformHandle local_endpoint_handle;
346   PrepareToPassRemoteEndpoint(&channel, &launch_options, &command_line);
347   local_endpoint_handle = channel.TakeLocalEndpoint().TakePlatformHandle();
348 ......
361  base::Process child_process = base::SpawnMultiProcessTestChild(
362     test_client_name, command_line, launch_options);
    ......
}

Line 344 creates a PlatformChannel instance.
Line 346 prepares one end of the channel to be passed as a launch parameter for process b.
Line 347 takes the local end of the PlatformChannel; the newly launched process b will hold the other end, and the two sides communicate over the pair.
Lines 361-362 launch process b.

The code first creates a PlatformChannel. PlatformChannel represents the channel; on Linux it is implemented with a Unix domain socket. A Unix domain socketpair creates two socket file descriptors that can talk to each other, but both descriptors initially live in the same process. We need a way to hand one end to the newly launched process b, so that b can use that socket descriptor to talk to process a. Because a and b have a parent-child relationship, after fork process b inherits the socket descriptors that a opened, which completes the establishment of the channel. Let's look at the implementation in code.

162 PlatformChannel::PlatformChannel() {
163   PlatformHandle local_handle;
164   PlatformHandle remote_handle;
165   CreateChannel(&local_handle, &remote_handle);
166   local_endpoint_ = PlatformChannelEndpoint(std::move(local_handle));
167   remote_endpoint_ = PlatformChannelEndpoint(std::move(remote_handle));
168 }

PlatformChannel's constructor calls CreateChannel to create the channel, returning the two ends in the out parameters local_handle and remote_handle; each end is then wrapped in a PlatformChannelEndpoint and stored in the PlatformChannel members local_endpoint_ and remote_endpoint_.

124 #elif BUILDFLAG(IS_POSIX)
125 void CreateChannel(PlatformHandle* local_endpoint,
126                    PlatformHandle* remote_endpoint) {
127   int fds[2];
      ......
131   PCHECK(socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == 0);
132 
133   // Set non-blocking on both ends.
134   PCHECK(fcntl(fds[0], F_SETFL, O_NONBLOCK) == 0);
135   PCHECK(fcntl(fds[1], F_SETFL, O_NONBLOCK) == 0);
136 
      ......
148 
149   *local_endpoint = PlatformHandle(base::ScopedFD(fds[0]));
150   *remote_endpoint = PlatformHandle(base::ScopedFD(fds[1]));
151   DCHECK(local_endpoint->is_valid());
152   DCHECK(remote_endpoint->is_valid());
153 }

CreateChannel has different implementations for different systems; here we focus on Linux, which is a POSIX system, and ignore the NaCl case.
Lines 131-135 call socketpair to create the Unix domain socket and set both socket file descriptors to non-blocking mode.
Lines 149-150 wrap the two ends of the Unix domain socket as PlatformHandle and assign them to the out parameters local_endpoint and remote_endpoint.

With that, the channel has been created. Next we analyze how process a hands one end of the channel to its child process b.
mojo/core/invitation_unittest.cc

 104 void PrepareToPassRemoteEndpoint(PlatformChannel* channel,
 105                                  base::LaunchOptions* options,
 106                                  base::CommandLine* command_line,
 107                                  base::StringPiece switch_name = {}) {
 108   std::string value;
      ......
 115   channel->PrepareToPassRemoteEndpoint(&options->fds_to_remap, &value);
       ......
 122   if (switch_name.empty()) {
 123     switch_name = PlatformChannel::kHandleSwitch;
 124   }
 125   command_line->AppendSwitchASCII(std::string(switch_name), value);
 126 }
 127 

In PrepareToPassRemoteEndpoint:
Line 115 calls PlatformChannel::PrepareToPassRemoteEndpoint() to put the channel's remote_endpoint_ into options->fds_to_remap, one of the launch parameters for process b. options->fds_to_remap tells the child process, during its creation, to close file descriptors it should not use, and tells it which descriptor number each passed descriptor should be mapped to. For example, if the PlatformChannel remote_endpoint_ descriptor is 100 and the child should map it to descriptor 3, then fds_to_remap contains the entry 100->3: at startup the child dups descriptor 100 onto 3 and then closes 100. Besides safety, this keeps the new process's descriptor numbers compact.
We won't expand PrepareToPassRemoteEndpoint further; it ultimately places the mapping into options->fds_to_remap, and stores in the string value the descriptor number that the remote_endpoint_ descriptor will map to in the child.
Line 125 appends the switch --mojo-platform-channel-handle=${value} to the command that launches process b, so that after startup process b knows from this switch which file descriptor to use to talk to process a (in other words, which descriptor is its end of the PlatformChannel).

Next, the launch of process b.

Process SpawnMultiProcessTestChild(const std::string& procname,
                                   const CommandLine& base_command_line,
                                   const LaunchOptions& options) {
  CommandLine command_line(base_command_line);
 ......
  return LaunchProcess(command_line, options);
}

SpawnMultiProcessTestChild calls LaunchProcess to launch process b.


Process LaunchProcess(const std::vector<std::string>& argv,
                      const LaunchOptions& options) {
 ......
  {
    pid = fork();
  }

  // Always restore the original signal mask in the parent.
  if (pid != 0) { 
    // Parent process
 ......
  }

  if (pid < 0) {
    DPLOG(ERROR) << "fork";
    return Process();
  }
  if (pid == 0) {
    // Child process
  ......
    for (size_t i = 0; i < options.fds_to_remap.size(); ++i) {
      const FileHandleMappingVector::value_type& value =
          options.fds_to_remap[i];
      fd_shuffle1.push_back(InjectionArc(value.first, value.second, false));
      fd_shuffle2.push_back(InjectionArc(value.first, value.second, false));
    }
    .....

    // fd_shuffle1 is mutated by this call because it cannot malloc.
    if (!ShuffleFileDescriptors(&fd_shuffle1))
      _exit(127);

    CloseSuperfluousFds(fd_shuffle2);
 ......

    execvp(executable_path, argv_cstr.data());

    RAW_LOG(ERROR, "LaunchProcess: failed to execvp:");
    RAW_LOG(ERROR, argv_cstr[0]);
    _exit(127);
  } 
.....
  return Process(pid);
}

LaunchProcess creates the child with fork. In the child it remaps the file descriptors and closes the unrelated ones; here the PlatformChannel descriptor gets mapped onto the descriptor specified by the mojo-platform-channel-handle switch. Finally it calls execvp to load and run the executable.

The final launch command is:
mojo_unittests --mojo-platform-channel-handle=3 --enable-features=TestFeatureForBrowserTest1 --disable-features=TestFeatureForBrowserTest2
In other words, descriptor 3 in process b is its end of the mojo channel to process a.

After process b starts, it runs the test client NonBrokerToNonBrokerHost.

DEFINE_TEST_CLIENT(NonBrokerToNonBrokerHost) {
  MojoHandle invitation = AcceptInvitation(MOJO_ACCEPT_INVITATION_FLAG_NONE);
  MojoHandle test = ExtractPipeFromInvitation(invitation);
......
}

The function first calls AcceptInvitation.
mojo/core/ipcz_driver/invitation.cc

428   static MojoHandle AcceptInvitation(MojoAcceptInvitationFlags flags,
 429                                      base::StringPiece switch_name = {}) {
 430     const auto& command_line = *base::CommandLine::ForCurrentProcess();
 431     PlatformChannelEndpoint channel_endpoint;
 432     if (switch_name.empty()) {
 433       channel_endpoint =
 434           PlatformChannel::RecoverPassedEndpointFromCommandLine(command_line);
 435     } else {
 436       channel_endpoint = PlatformChannel::RecoverPassedEndpointFromString(
 437           command_line.GetSwitchValueASCII(switch_name));
 438     }
 439     MojoPlatformHandle endpoint_handle;
 440     PlatformHandle::ToMojoPlatformHandle(channel_endpoint.TakePlatformHandle(),
 441                                          &endpoint_handle);
 442     CHECK_NE(endpoint_handle.type, MOJO_PLATFORM_HANDLE_TYPE_INVALID);
 443 
 444     MojoInvitationTransportEndpoint transport_endpoint;
 445     transport_endpoint.struct_size = sizeof(transport_endpoint);
 446     transport_endpoint.type = MOJO_INVITATION_TRANSPORT_TYPE_CHANNEL;
 447     transport_endpoint.num_platform_handles = 1;
 448     transport_endpoint.platform_handles = &endpoint_handle;
 449 
 450     MojoAcceptInvitationOptions options;
 451     options.struct_size = sizeof(options);
 452     options.flags = flags;
 453     MojoHandle invitation;
 454     CHECK_EQ(MOJO_RESULT_OK,
 455              MojoAcceptInvitation(&transport_endpoint, &options, &invitation));
 456     return invitation;
 457   }

Lines 432-438 recover a PlatformChannelEndpoint, one end of the PlatformChannel, from the mojo channel file descriptor specified on the command line.

Lines 444-448 use this PlatformChannel end to create a transport endpoint, and finally MojoAcceptInvitation is called. Let's step into MojoAcceptInvitation to see how the PlatformChannel receives data.

mojo/core/core_ipcz.cc

MojoResult MojoAcceptInvitationIpcz(
    const MojoInvitationTransportEndpoint* transport_endpoint,
    const MojoAcceptInvitationOptions* options,
    MojoHandle* invitation_handle) {
  if (!transport_endpoint || !invitation_handle ||
      (options && options->struct_size < sizeof(*options))) {
    return MOJO_RESULT_INVALID_ARGUMENT;
  }
  *invitation_handle =
      ipcz_driver::Invitation::Accept(transport_endpoint, options);
  return MOJO_RESULT_OK;
}

MojoAcceptInvitation actually invokes MojoAcceptInvitationIpcz, which in turn calls ipcz_driver::Invitation::Accept; the first parameter of Invitation::Accept is the transport endpoint we care about.

320 // static
321 MojoHandle Invitation::Accept(
322     const MojoInvitationTransportEndpoint* transport_endpoint,
323     const MojoAcceptInvitationOptions* options) {
      ......
392   IpczHandle portals[kMaxAttachments + 1];
393   IpczDriverHandle transport = CreateTransportForMojoEndpoint(
394       {.source = is_isolated ? Transport::kBroker : Transport::kNonBroker,
395        .destination = Transport::kBroker},
396       *transport_endpoint,
397       {
398           .is_peer_trusted = true,
399           .is_trusted_by_peer = is_elevated,
400           .leak_channel_on_shutdown = leak_transport,
401       });

      ......  
420 
421   IpczResult result = GetIpczAPI().ConnectNode(
422       GetIpczNode(), transport, kMaxAttachments + 1, flags, nullptr, portals);
423   CHECK_EQ(result, IPCZ_RESULT_OK);
424 
      ......
437   return Box(std::move(invitation));
438 }

Lines 393-401 create the transport object, with the transport endpoint passed down from the caller as one of its arguments. Lines 421-422 connect to process a's Node (requesting creation of a NodeLink); this is where listening on the PlatformChannel's file descriptor begins. The Transport object represents the transport and wraps the channel: ipcz inter-process communication goes through Transport, which drives the channel. Transport is an object of the ipcz driver layer.
Let's first look at the creation of the Transport object.

IpczDriverHandle CreateTransportForMojoEndpoint(
    Transport::EndpointTypes endpoint_types,
    const MojoInvitationTransportEndpoint& endpoint,
    const TransportOptions& options,
    base::Process remote_process = base::Process(),
    MojoProcessErrorHandler error_handler = nullptr,
    uintptr_t error_handler_context = 0,
    bool is_remote_process_untrusted = false) {
  CHECK_EQ(endpoint.num_platform_handles, 1u);
  auto handle =
      PlatformHandle::FromMojoPlatformHandle(&endpoint.platform_handles[0]);
  if (!handle.is_valid()) {
    return IPCZ_INVALID_DRIVER_HANDLE;
  }

  auto transport = base::MakeRefCounted<Transport>(
      endpoint_types, PlatformChannelEndpoint(std::move(handle)),
      std::move(remote_process), is_remote_process_untrusted);
  transport->SetErrorHandler(error_handler, error_handler_context);
  transport->set_leak_channel_on_shutdown(options.leak_channel_on_shutdown);
  transport->set_is_peer_trusted(options.is_peer_trusted);
  transport->set_is_trusted_by_peer(options.is_trusted_by_peer);
  return ObjectBase::ReleaseAsHandle(std::move(transport));
}

Transport::Transport(EndpointTypes endpoint_types,
                     PlatformChannelEndpoint endpoint,
                     base::Process remote_process,
                     bool is_remote_process_untrusted)
    : endpoint_types_(endpoint_types),
      remote_process_(std::move(remote_process)),
#if BUILDFLAG(IS_WIN)
      is_remote_process_untrusted_(is_remote_process_untrusted),
#endif
      inactive_endpoint_(std::move(endpoint)) {
}

CreateTransportForMojoEndpoint creates the Transport instance. Transport holds one end of the PlatformChannel for cross-process communication, and records in remote_process which process it talks to, for security. The Transport member inactive_endpoint_ is that PlatformChannel endpoint.

Now for the main event: the implementation of ConnectNode.

IpczResult ConnectNode(IpczHandle node_handle,
                       IpczDriverHandle driver_transport,
                       size_t num_initial_portals,
                       IpczConnectNodeFlags flags,
                       const void* options,
                       IpczHandle* initial_portals) {
  ipcz::Node* node = ipcz::Node::FromHandle(node_handle);
.......
  return node->ConnectNode(
      driver_transport, flags,
      absl::Span<IpczHandle>(initial_portals, num_initial_portals));
}

ConnectNode's main goal is to connect to the peer end of the channel and request creation of a NodeLink; along the way it creates some RouterLinks on top of the NodeLink, effectively pre-establishing links. The function first does some parameter checks, then calls Node::ConnectNode to connect to the peer process.
third_party/ipcz/src/ipcz/node.cc

IpczResult Node::ConnectNode(IpczDriverHandle driver_transport,
                             IpczConnectNodeFlags flags,
 ......
  auto transport =
      MakeRefCounted<DriverTransport>(DriverObject(driver_, driver_transport));
  IpczResult result = NodeConnector::ConnectNode(WrapRefCounted(this),
                                                 transport, flags, portals);
 .......
  return IPCZ_RESULT_OK;
}

Node::ConnectNode first calls NodeConnector::ConnectNode to establish a link with the remote Node.
third_party/ipcz/src/ipcz/node_connector.cc

// static
IpczResult NodeConnector::ConnectNode(
    Ref<Node> node,
    Ref<DriverTransport> transport,
    IpczConnectNodeFlags flags,
    const std::vector<Ref<Portal>>& initial_portals,
    ConnectCallback callback) {
......

  auto [connector, result] = CreateConnector(
      std::move(node), std::move(transport), flags, initial_portals,
      std::move(broker_link), std::move(callback));
  if (result != IPCZ_RESULT_OK) {
    return result;
  }

  if (!share_broker && !connector->ActivateTransport()) {
    // Note that when referring another node to our own broker, we don't
    // activate the transport, since the transport will be passed to the broker.
    // See NodeConnectorForReferrer.
    return IPCZ_RESULT_UNKNOWN;
  }

  if (!connector->Connect()) {
    return IPCZ_RESULT_UNKNOWN;
  }

  return IPCZ_RESULT_OK;
}

NodeConnector::ConnectNode does three main things:

  1. Create the connector object (NodeConnector).
  2. Activate the transport (start listening for messages, i.e. for read/write state changes on the Unix domain socket file descriptor).
  3. Request establishment of the link, the NodeLink.

Here we focus on how the channel is used, so we postpone the NodeLink establishment process; it comes up again when we analyze shared memory. First, the creation of the connector object. To follow the channel's usage we only need to keep our eyes on Transport.
third_party/ipcz/src/ipcz/node_connector.cc

std::pair<Ref<NodeConnector>, IpczResult> CreateConnector(
    Ref<Node> node,
    Ref<DriverTransport> transport,
    IpczConnectNodeFlags flags,
    const std::vector<Ref<Portal>>& initial_portals,
    Ref<NodeLink> broker_link,
    NodeConnector::ConnectCallback callback) {
  ......
  if (from_broker) {
    DriverMemoryWithMapping memory =
        NodeLinkMemory::AllocateMemory(node->driver());
    if (!memory.mapping.is_valid()) {
      return {nullptr, IPCZ_RESULT_RESOURCE_EXHAUSTED};
    }

    if (to_broker) {
      return {MakeRefCounted<NodeConnectorForBrokerToBroker>(
                  std::move(node), std::move(transport), std::move(memory),
                  flags, initial_portals, std::move(callback)),
              IPCZ_RESULT_OK};
    }

    return {MakeRefCounted<NodeConnectorForBrokerToNonBroker>(
                std::move(node), std::move(transport), std::move(memory), flags,
                initial_portals, std::move(callback)),
            IPCZ_RESULT_OK};
  }

  if (to_broker) {
    return {MakeRefCounted<NodeConnectorForNonBrokerToBroker>(
                std::move(node), std::move(transport), flags, initial_portals,
                std::move(callback)),
            IPCZ_RESULT_OK};
  }

  if (share_broker) {
    return {MakeRefCounted<NodeConnectorForReferrer>(
                std::move(node), std::move(transport), flags, initial_portals,
                std::move(broker_link), std::move(callback)),
            IPCZ_RESULT_OK};
  }

  if (inherit_broker) {
    return {MakeRefCounted<NodeConnectorForReferredNonBroker>(
                std::move(node), std::move(transport), flags, initial_portals,
                std::move(callback)),
            IPCZ_RESULT_OK};
  }

  return {nullptr, IPCZ_RESULT_INVALID_ARGUMENT};
}

The NodeConnector object is used to connect and establish the NodeLink. There are five kinds of connectors:

  • NodeConnectorForBrokerToBroker: used when a broker connects to another broker
  • NodeConnectorForBrokerToNonBroker: used when a broker connects to a non-broker
  • NodeConnectorForNonBrokerToBroker: used when a non-broker connects to a broker
  • NodeConnectorForReferrer: requests a proxied connection through the broker
  • NodeConnectorForReferredNonBroker: used by the referred node to wait for the referral to be accepted

If the current Node is a broker node, it calls NodeLinkMemory::AllocateMemory(node->driver()) to allocate shared memory; we analyze the shared memory mechanism in detail later. All five connectors inherit from NodeConnector and invoke NodeConnector's constructor.

NodeConnector::NodeConnector(Ref<Node> node,
                             Ref<DriverTransport> transport,
                             IpczConnectNodeFlags flags,
                             std::vector<Ref<Portal>> waiting_portals,
                             ConnectCallback callback)
    : node_(std::move(node)),
      transport_(std::move(transport)),
      flags_(flags),
      waiting_portals_(std::move(waiting_portals)),
      callback_(std::move(callback)) {}

NodeConnector's member node_ represents the current Node, and transport_ holds one end of the PlatformChannel. Note that NodeConnector belongs to the ipcz layer; it holds the ipcz driver layer's Transport through a DriverTransport object. Activating the Transport mainly means starting to listen for I/O events on the Unix domain socket and process ipcz messages.

bool NodeConnector::ActivateTransport() {
  transport_->set_listener(WrapRefCounted(this));
  if (transport_->Activate() != IPCZ_RESULT_OK) {
    RejectConnection();
    return false;
  }
  return true;
}

Because we are still in the connection phase, the Transport's listener_ is first set to the connector, so that when the Transport receives a message it calls back into NodeConnector to complete the connection. Then Transport->Activate() is called to activate the transport. Let's look at the activation code.

IpczResult DriverTransport::Activate() {
  // Acquire a self-reference, balanced in NotifyTransport() when the driver
  // invokes its activity handler with IPCZ_TRANSPORT_ACTIVITY_DEACTIVATED.
  IpczHandle handle = ReleaseAsHandle(WrapRefCounted(this));
  return transport_.driver()->ActivateTransport(
      transport_.handle(), handle, NotifyTransport, IPCZ_NO_FLAGS, nullptr);
}

Several Transport-like types are in play here, which gets confusing; let's summarize their relationships.
[Figure 2: relationships among the Transport-related classes]

You can refer back to chromium communication system: ipcz (4) - layering, the relationship with mojo, and handles to think about why the data structures are designed this way.

mojo/core/ipcz_driver/driver.cc

IpczResult IPCZ_API
ActivateTransport(IpczDriverHandle transport_handle,
                  IpczHandle ipcz_transport,
                  IpczTransportActivityHandler activity_handler,
                  uint32_t flags,
                  const void* options) {
  Transport* transport = Transport::FromHandle(transport_handle);
  if (!transport) {
    return IPCZ_RESULT_INVALID_ARGUMENT;
  }

  transport->Activate(ipcz_transport, activity_handler);
  return IPCZ_RESULT_OK;
}

This ends up calling the ipcz driver layer's Transport::Activate function.

281 bool Transport::Activate(IpczHandle transport,
282                          IpczTransportActivityHandler activity_handler) {
283   scoped_refptr<Channel> channel;
284   std::vector<PendingTransmission> pending_transmissions;
285   {
286     base::AutoLock lock(lock_);
287     if (channel_ || !inactive_endpoint_.is_valid()) {
288       return false;
289     }
290 
291     ipcz_transport_ = transport;
292     activity_handler_ = activity_handler;
293     self_reference_for_channel_ = base::WrapRefCounted(this);
294     channel_ = Channel::CreateForIpczDriver(this, std::move(inactive_endpoint_),
295                                             io_task_runner_);
296     channel_->Start();
        ......
315 
316   return true;
317 }

Activate takes two parameters. The first, transport, is the ipcz layer's DriverTransport, used for message callbacks (from the ipcz driver layer up into the ipcz layer). The second, activity_handler, is NotifyTransport: when a message arrives, NotifyTransport is invoked with the DriverTransport as its first argument, and the DriverTransport then calls the corresponding callback on its listener (which, before the link is established, is the NodeConnector).

Lines 291-292 save the two parameters in the Transport members ipcz_transport_ and activity_handler_.

Lines 294-295 use inactive_endpoint_ to create a Channel and call Start() to begin listening for messages. This Channel is the mojo-layer Channel: after layer upon layer of wrapping, the PlatformChannel has been turned into the ipcz driver layer's Channel, which is rather exhausting. As chromium communication system: ipcz (4) - layering, the relationship with mojo, and handles explained, ipcz communicates by borrowing the mojo Channel.

mojo/core/channel.cc

// static
scoped_refptr<Channel> Channel::CreateForIpczDriver(
    Delegate* delegate,
    PlatformChannelEndpoint endpoint,
    scoped_refptr<base::SingleThreadTaskRunner> io_task_runner) {
......
  return Create(delegate, ConnectionParams{std::move(endpoint)},
                HandlePolicy::kAcceptHandles, std::move(io_task_runner));
}

mojo/core/channel_posix.cc

scoped_refptr<Channel> Channel::Create(
    Delegate* delegate,
    ConnectionParams connection_params,
    HandlePolicy handle_policy,
    scoped_refptr<base::SingleThreadTaskRunner> io_task_runner) {
......
  return new ChannelLinux(delegate, std::move(connection_params), handle_policy,
                          io_task_runner);
......
}

This ultimately creates a ChannelLinux, the Channel implementation for Linux; other platforms have their own implementations.
mojo/core/channel_linux.cc

ChannelLinux::ChannelLinux(
    Delegate* delegate,
    ConnectionParams connection_params,
    HandlePolicy handle_policy,
    scoped_refptr<base::SingleThreadTaskRunner> io_task_runner)
    : ChannelPosix(delegate,
                   std::move(connection_params),
                   handle_policy,
                   io_task_runner),
      num_pages_(g_shared_mem_pages.load()) {}

ChannelPosix::ChannelPosix(
    Delegate* delegate,
    ConnectionParams connection_params,
    HandlePolicy handle_policy,
    scoped_refptr<base::SingleThreadTaskRunner> io_task_runner)
    : Channel(delegate, handle_policy),
      self_(this),
      io_task_runner_(io_task_runner) {
  socket_ = connection_params.TakeEndpoint().TakePlatformHandle().TakeFD();
  CHECK(socket_.is_valid());
}

ChannelLinux inherits from ChannelPosix. ChannelPosix holds one end of the Unix domain socket in socket_ and uses it to talk to the other end. ChannelPosix in turn inherits from Channel.
mojo/core/channel.cc

Channel::Channel(Delegate* delegate,
                 HandlePolicy handle_policy,
                 DispatchBufferPolicy buffer_policy)
    : is_for_ipcz_(delegate ? delegate->IsIpczTransport() : false),
      delegate_(delegate),
      handle_policy_(handle_policy),
      read_buffer_(buffer_policy == DispatchBufferPolicy::kManaged
                       ? new ReadBuffer
                       : nullptr) {}

The Channel's member delegate_ points to the Transport, so that message handling can be delegated to the ipcz driver layer's Transport. ipcz and mojo are in a transition period, which is why the code is this tangled. Below are Channel's basic data structures.

[Figure 3: Channel class structure]

Next, starting the Channel.

void ChannelPosix::Start() {
  if (io_task_runner_->RunsTasksInCurrentSequence()) {
    StartOnIOThread();
  } else {
    io_task_runner_->PostTask(
        FROM_HERE, base::BindOnce(&ChannelPosix::StartOnIOThread, this));
  }
}

This ends up calling StartOnIOThread to start the channel; in other words, the Unix domain socket is watched on the I/O thread.

void ChannelPosix::StartOnIOThread() {
  DCHECK(!read_watcher_);
  DCHECK(!write_watcher_);
  read_watcher_ =
      std::make_unique<base::MessagePumpForIO::FdWatchController>(FROM_HERE);
  base::CurrentThread::Get()->AddDestructionObserver(this);
  write_watcher_ =
      std::make_unique<base::MessagePumpForIO::FdWatchController>(FROM_HERE);
  base::CurrentIOThread::Get()->WatchFileDescriptor(
      socket_.get(), true /* persistent */, base::MessagePumpForIO::WATCH_READ,
      read_watcher_.get(), this);
  base::AutoLock lock(write_lock_);
  FlushOutgoingMessagesNoLock();
}

StartOnIOThread creates two watcher objects for the Unix domain socket, read_watcher_ and write_watcher_, and registers them with the I/O thread to observe the channel's readable and writable events. When the socket becomes readable, ChannelPosix::OnFileCanReadWithoutBlocking() is called; when it becomes writable, ChannelPosix::OnFileCanWriteWithoutBlocking() is called.

The writable case is straightforward: the send buffer was full and writing stalled, so when the socket becomes writable again the remaining messages are flushed out (OnFileCanWriteWithoutBlocking). Let's analyze the receive path instead.

mojo/core/channel_posix.cc

270 void ChannelPosix::OnFileCanReadWithoutBlocking(int fd) {
271   CHECK_EQ(fd, socket_.get());
272 
273   bool validation_error = false;
274   bool read_error = false;
275   size_t next_read_size = 0;
276   size_t buffer_capacity = 0;
277   size_t total_bytes_read = 0;
278   size_t bytes_read = 0;
279   do {
280     buffer_capacity = next_read_size;
281     char* buffer = GetReadBuffer(&buffer_capacity);
282     DCHECK_GT(buffer_capacity, 0u);
283 
284     std::vector<base::ScopedFD> incoming_fds;
285     ssize_t read_result =
286         SocketRecvmsg(socket_.get(), buffer, buffer_capacity, &incoming_fds);
287     for (auto& incoming_fd : incoming_fds)
288       incoming_fds_.emplace_back(std::move(incoming_fd));
289 
290     if (read_result > 0) {
291       bytes_read = static_cast<size_t>(read_result);
292       total_bytes_read += bytes_read;
293       if (!OnReadComplete(bytes_read, &next_read_size)) {
294         read_error = true;
295         validation_error = true;
296         break;
297       }
298     } else if (read_result == 0 || (errno != EAGAIN && errno != EWOULDBLOCK)) {
299       read_error = true;
300       break;
301     } else {
302       // We expect more data but there is none to read. The
303       // FileDescriptorWatcher will wake us up again once there is.
304       DCHECK(errno == EAGAIN || errno == EWOULDBLOCK);
305       return;
306     }
307   } while (bytes_read == buffer_capacity &&
308            total_bytes_read < kMaxBatchReadCapacity && next_read_size > 0);
309   if (read_error) {
310     // Stop receiving read notifications.
311     read_watcher_.reset();
312     if (validation_error)
313       OnError(Error::kReceivedMalformedData);
314     else
315       OnError(Error::kDisconnected);
316   }
317 }

The whole function is the usual non-blocking I/O read loop. It reads data with SocketRecvmsg, and once a complete message has been read it calls OnReadComplete to process it, until EAGAIN or EWOULDBLOCK indicates the buffer has no more data, at which point it stops reading and waits for the next readable event.
Note that besides the data buffer, SocketRecvmsg also takes an incoming_fds parameter for receiving file descriptors: the Unix domain socket channel can carry file descriptors as well. This is a common cross-process technique, for example for passing the file descriptor backing a shared memory mapping. Let's look at the implementation of SocketRecvmsg:
mojo/public/cpp/platform/socket_utils_posix.cc

128 ssize_t SocketRecvmsg(base::PlatformFile socket,
129                       void* buf,
130                       size_t num_bytes,
131                       std::vector<base::ScopedFD>* descriptors,
132                       bool block) {
133   struct iovec iov = {buf, num_bytes};
134   char cmsg_buf[CMSG_SPACE(kMaxSendmsgHandles * sizeof(int))];
135   struct msghdr msg = {};
136   msg.msg_iov = &iov;
137   msg.msg_iovlen = 1;
138   msg.msg_control = cmsg_buf;
139   msg.msg_controllen = sizeof(cmsg_buf);
140   ssize_t result =
141       HANDLE_EINTR(recvmsg(socket, &msg, block ? 0 : MSG_DONTWAIT));
142   if (result < 0)
143     return result;
144 
145   if (msg.msg_controllen == 0)
146     return result;
147 
148   DCHECK(!(msg.msg_flags & MSG_CTRUNC));
149 
150   descriptors->clear();
151   for (cmsghdr* cmsg = CMSG_FIRSTHDR(&msg); cmsg;
152        cmsg = CMSG_NXTHDR(&msg, cmsg)) {
153     if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS) {
154       size_t payload_length = cmsg->cmsg_len - CMSG_LEN(0);
155       DCHECK_EQ(payload_length % sizeof(int), 0u);
156       size_t num_fds = payload_length / sizeof(int);
157       const int* fds = reinterpret_cast<int*>(CMSG_DATA(cmsg));
158       for (size_t i = 0; i < num_fds; ++i) {
159         base::ScopedFD fd(fds[i]);
160         DCHECK(fd.is_valid());
161         descriptors->emplace_back(std::move(fd));
162       }
163     }
164   }

SocketRecvmsg receives messages with recvmsg, which means the peer sends them with sendmsg, and sendmsg messages can carry file descriptors. Nothing special here.

Let's continue with OnReadComplete and how the received data is processed.
mojo/core/channel.cc

bool Channel::OnReadComplete(size_t bytes_read, size_t* next_read_size_hint) {
    ......
    DispatchResult result =
        TryDispatchMessage(base::make_span(read_buffer_->occupied_bytes(),
                                           read_buffer_->num_occupied_bytes()),
                           next_read_size_hint);
  ......
  return true;
}

Omitting some buffer-management logic, the main line of the function is to call TryDispatchMessage to dispatch the message.

 962 Channel::DispatchResult Channel::TryDispatchMessage(
 963     base::span<const char> buffer,
 964     size_t* size_hint) {
 965   TRACE_EVENT(TRACE_DISABLED_BY_DEFAULT("toplevel.ipc"),
 966               "Mojo dispatch message");
 967   if (is_for_ipcz_) {
 968     // This has already been validated.
 969     DCHECK_GE(buffer.size(), sizeof(Message::IpczHeader));
 970 
         // Read the IpczHeader
 971     const auto& header =
 972         *reinterpret_cast<const Message::IpczHeader*>(buffer.data());
 973     const size_t header_size = header.size;
 974     const size_t num_bytes = header.num_bytes;
 975     const size_t num_handles = header.num_handles;
         ......
         // Convert the received file descriptors into PlatformHandles
 985     std::vector<PlatformHandle> handles;
 986     if (num_handles > 0) {
 987       if (handle_policy_ == HandlePolicy::kRejectHandles ||
 988           !GetReadPlatformHandlesForIpcz(num_handles, handles)) {
 989         return DispatchResult::kError;
 990       }
 991       if (handles.empty()) {
 992         return DispatchResult::kMissingHandles;
 993       }
 994     }
         // Everything after the header is the actual payload
 995     auto data = buffer.first(num_bytes).subspan(header_size);
 996     delegate_->OnChannelMessage(data.data(), data.size(), std::move(handles));
 997     *size_hint = num_bytes;
 998     return DispatchResult::kOK;
 999   }
1000 
       ......
1081 }

The ipcz handling is simple: read out the header, wrap the received file descriptors into PlatformHandles, and finally hand the payload to the upper layer via Transport::OnChannelMessage.

We followed process b's AcceptInvitation through the channel-activation path; now let's look at how process a activates its channel.

void MAYBE_InvitationTest::SendInvitationToClient(
    PlatformHandle endpoint_handle,
    base::ProcessHandle process,
    MojoHandle* primordial_pipes,
    size_t num_primordial_pipes,
    MojoSendInvitationFlags flags,
    MojoProcessErrorHandler error_handler,
    uintptr_t error_handler_context,
    base::StringPiece isolated_invitation_name) {
  MojoPlatformHandle handle;
  PlatformHandle::ToMojoPlatformHandle(std::move(endpoint_handle), &handle);
  CHECK_NE(handle.type, MOJO_PLATFORM_HANDLE_TYPE_INVALID);

  MojoHandle invitation;
  CHECK_EQ(MOJO_RESULT_OK, MojoCreateInvitation(nullptr, &invitation));
 ......

  MojoInvitationTransportEndpoint transport_endpoint;
  transport_endpoint.struct_size = sizeof(transport_endpoint);
  transport_endpoint.type = MOJO_INVITATION_TRANSPORT_TYPE_CHANNEL;
  transport_endpoint.num_platform_handles = 1;
  transport_endpoint.platform_handles = &handle;

  MojoSendInvitationOptions options;
  options.struct_size = sizeof(options);
  options.flags = flags;
  if (flags & MOJO_SEND_INVITATION_FLAG_ISOLATED) {
    options.isolated_connection_name = isolated_invitation_name.data();
    options.isolated_connection_name_length =
        static_cast<uint32_t>(isolated_invitation_name.size());
  }
  CHECK_EQ(MOJO_RESULT_OK,
           MojoSendInvitation(invitation, &process_handle, &transport_endpoint,
                              error_handler, error_handler_context, &options));
}

SendInvitationToClient creates the invitation object and a MojoInvitationTransportEndpoint as one endpoint of the Transport, then calls MojoSendInvitation to invite process b to create a NodeLink.

203 MojoResult Invitation::Send(
204     const MojoPlatformProcessHandle* process_handle,
205     const MojoInvitationTransportEndpoint* transport_endpoint,
206     MojoProcessErrorHandler error_handler,
207     uintptr_t error_handler_context,
208     const MojoSendInvitationOptions* options) {
      ......
285   IpczDriverHandle transport = CreateTransportForMojoEndpoint(
286       {.source = config.is_broker ? Transport::kBroker : Transport::kNonBroker,
287        .destination = is_isolated ? Transport::kBroker : Transport::kNonBroker},
288       *transport_endpoint,
289       {.is_peer_trusted = is_peer_elevated, .is_trusted_by_peer = true},
290       std::move(remote_process), error_handler, error_handler_context,
291       is_remote_process_untrusted);
292   if (transport == IPCZ_INVALID_DRIVER_HANDLE) {
293     return MOJO_RESULT_INVALID_ARGUMENT;
294   }
295 
296   if (num_attachments_ == 0 || max_attachment_index_ != num_attachments_ - 1) {
297     return MOJO_RESULT_FAILED_PRECONDITION;
298   }
299 
300   // Note that we reserve the first initial portal for internal use, hence the
301   // additional (kMaxAttachments + 1) portal here. Portals corresponding to
302   // application-provided attachments begin at index 1.
303   IpczHandle portals[kMaxAttachments + 1];
304   IpczResult result = GetIpczAPI().ConnectNode(
305       GetIpczNode(), transport, num_attachments_ + 1, flags, nullptr, portals);
306   if (result != IPCZ_RESULT_OK) {
307     return result;
308   }
      ......
317   return MOJO_RESULT_OK;
318 }

The function likewise calls CreateTransportForMojoEndpoint to create a Transport, then calls GetIpczAPI().ConnectNode() to connect to the peer. This mirrors what process b does, including the transport-activation step in between. With that, we have covered channel establishment and activation, and how the two processes communicate over the channel.

Shared Memory

When a Connector is created on a broker node, shared memory must be created. In the scenario we analyzed, process a is the broker node, so it creates the shared memory. Below we analyze how process a passes the shared memory to process b and how the shared memory is used, returning to the creation of the NodeConnector.
third_party/ipcz/src/ipcz/node_connector.cc

std::pair<Ref<NodeConnector>, IpczResult> CreateConnector(
    Ref<Node> node,
    Ref<DriverTransport> transport,
    IpczConnectNodeFlags flags,
    const std::vector<Ref<Portal>>& initial_portals,
    Ref<NodeLink> broker_link,
    NodeConnector::ConnectCallback callback) {
  ......
  if (from_broker) {
    DriverMemoryWithMapping memory =
        NodeLinkMemory::AllocateMemory(node->driver());
    if (!memory.mapping.is_valid()) {
      return {nullptr, IPCZ_RESULT_RESOURCE_EXHAUSTED};
    }

    if (to_broker) {
      return {MakeRefCounted<NodeConnectorForBrokerToBroker>(
                  std::move(node), std::move(transport), std::move(memory),
                  flags, initial_portals, std::move(callback)),
              IPCZ_RESULT_OK};
    }

    return {MakeRefCounted<NodeConnectorForBrokerToNonBroker>(
                std::move(node), std::move(transport), std::move(memory), flags,
                initial_portals, std::move(callback)),
            IPCZ_RESULT_OK};
  }

  if (to_broker) {
    return {MakeRefCounted<NodeConnectorForNonBrokerToBroker>(
                std::move(node), std::move(transport), flags, initial_portals,
                std::move(callback)),
            IPCZ_RESULT_OK};
  }

  if (share_broker) {
    return {MakeRefCounted<NodeConnectorForReferrer>(
                std::move(node), std::move(transport), flags, initial_portals,
                std::move(broker_link), std::move(callback)),
            IPCZ_RESULT_OK};
  }

  if (inherit_broker) {
    return {MakeRefCounted<NodeConnectorForReferredNonBroker>(
                std::move(node), std::move(transport), flags, initial_portals,
                std::move(callback)),
            IPCZ_RESULT_OK};
  }

  return {nullptr, IPCZ_RESULT_INVALID_ARGUMENT};
}

A NodeConnector object is used to establish a NodeLink. There are five kinds of connectors:

  • NodeConnectorForBrokerToBroker: connects a broker to another broker
  • NodeConnectorForBrokerToNonBroker: connects a broker to a non-broker
  • NodeConnectorForNonBrokerToBroker: connects a non-broker to a broker
  • NodeConnectorForReferrer: requests a brokered connection through the broker
  • NodeConnectorForReferredNonBroker: used by the referred node to wait for the broker to accept

We analyze the creation and use of shared memory in the scenario where process a is the broker and process b is a non-broker. Since a is a broker, from_broker is true, so NodeLinkMemory::AllocateMemory(node->driver()) is called to allocate the shared memory.
ipcz/node_link_memory.cc

200 // static
201 DriverMemoryWithMapping NodeLinkMemory::AllocateMemory(
202     const IpczDriver& driver) {
203   DriverMemory memory(driver, kPrimaryBufferSize);
204   if (!memory.is_valid()) {
205     return {};
206   }
207 
208   DriverMemoryMapping mapping = memory.Map();
209   if (!mapping.is_valid()) {
210     return {};
211   }
212 
213   PrimaryBuffer& primary_buffer =
214       *reinterpret_cast<PrimaryBuffer*>(mapping.bytes().data());
215 
216   // The first allocable BufferId is 1, because the primary buffer uses 0.
217   primary_buffer.header.next_buffer_id.store(1, std::memory_order_relaxed);
218 
219   // The first allocable SublinkId is kMaxInitialPortals. This way it doesn't
220   // matter whether the two ends of a NodeLink initiate their connection with a
221   // different initial portal count: neither can request more than
222   // kMaxInitialPortals, so neither will be assuming initial ownership of any
223   // SublinkIds at or above this value.
224   primary_buffer.header.next_sublink_id.store(kMaxInitialPortals,
225                                               std::memory_order_relaxed);
226 
227   // Note: InitializeRegion() performs an atomic release, so atomic stores
228   // before this section can be relaxed.
229   primary_buffer.block_allocator_64().InitializeRegion();
230   primary_buffer.block_allocator_256().InitializeRegion();
231   primary_buffer.block_allocator_512().InitializeRegion();
232   primary_buffer.block_allocator_1k().InitializeRegion();
233   primary_buffer.block_allocator_2k().InitializeRegion();
234   primary_buffer.block_allocator_4k().InitializeRegion();
235   return {std::move(memory), std::move(mapping)};
236 }

Line 203 opens a file under /dev/shm (tmpfs).
Line 208 maps this file into the process's address space with mmap (the file descriptor can later be sent to process b over a Unix domain socket, and b maps it into its own address space, achieving shared memory).
Lines 213-234 reinterpret this memory as the PrimaryBuffer data structure and initialize it.
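To make the fd-passing step in line 208's note concrete, here is a minimal sketch (not ipcz code) of sending a file descriptor across a Unix domain socket with SCM_RIGHTS ancillary data, which is the mechanism PlatformChannel relies on; a socketpair within one process stands in for the cross-process channel, but the kernel path is identical:

```cpp
#include <cassert>
#include <cstring>
#include <sys/socket.h>
#include <unistd.h>

// Sends one fd over a Unix domain socket as SCM_RIGHTS ancillary data,
// alongside a single dummy payload byte.
bool SendFd(int sock, int fd) {
  char byte = 0;
  iovec iov{&byte, 1};
  char ctrl[CMSG_SPACE(sizeof(int))] = {};
  msghdr msg{};
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = ctrl;
  msg.msg_controllen = sizeof(ctrl);
  cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;  // payload is file descriptors
  cmsg->cmsg_len = CMSG_LEN(sizeof(int));
  std::memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
  return sendmsg(sock, &msg, 0) == 1;
}

// Receives the fd; the kernel installs a fresh descriptor in the receiver
// that refers to the same open file description.
int ReceiveFd(int sock) {
  char byte;
  iovec iov{&byte, 1};
  char ctrl[CMSG_SPACE(sizeof(int))] = {};
  msghdr msg{};
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = ctrl;
  msg.msg_controllen = sizeof(ctrl);
  if (recvmsg(sock, &msg, 0) != 1) return -1;
  cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
  int fd = -1;
  std::memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
  return fd;
}
```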

Let's first look at how the shared memory is created.
third_party/ipcz/src/ipcz/driver_memory.cc

DriverMemory::DriverMemory(const IpczDriver& driver, size_t num_bytes)
    : size_(num_bytes) {
  ABSL_ASSERT(num_bytes > 0);
  IpczDriverHandle handle;
  IpczResult result =
      driver.AllocateSharedMemory(num_bytes, IPCZ_NO_FLAGS, nullptr, &handle);
  ABSL_ASSERT(result == IPCZ_RESULT_OK);
  memory_ = DriverObject(driver, handle);
}

We are back in the ipcz driver layer: driver.AllocateSharedMemory allocates the shared memory and returns, via the out parameter handle, a handle to the driver-level shared memory object (a SharedBuffer). That object is then stored in a DriverObject.

mojo/core/ipcz_driver/driver.cc

IpczResult IPCZ_API AllocateSharedMemory(size_t num_bytes,
                                         uint32_t flags,
                                         const void* options,
                                         IpczDriverHandle* driver_memory) {
  auto region = base::UnsafeSharedMemoryRegion::Create(num_bytes);
  *driver_memory = SharedBuffer::ReleaseAsHandle(
      SharedBuffer::MakeForRegion(std::move(region)));
  return IPCZ_RESULT_OK;
}

AllocateSharedMemory creates a memory region, wraps it in a SharedBuffer object, and converts that into an IpczDriverHandle assigned to the out parameter.
Let's first look at how the UnsafeSharedMemoryRegion is created.

UnsafeSharedMemoryRegion UnsafeSharedMemoryRegion::Create(size_t size) {
......

  subtle::PlatformSharedMemoryRegion handle =
      subtle::PlatformSharedMemoryRegion::CreateUnsafe(size);

  return UnsafeSharedMemoryRegion(std::move(handle));
}

Create() first instantiates a PlatformSharedMemoryRegion object and wraps it in an UnsafeSharedMemoryRegion.
base/memory/platform_shared_memory_region.cc

PlatformSharedMemoryRegion PlatformSharedMemoryRegion::CreateUnsafe(
    size_t size) {
  return Create(Mode::kUnsafe, size);
}

base/memory/platform_shared_memory_region_posix.cc

168 // static
169 PlatformSharedMemoryRegion PlatformSharedMemoryRegion::Create(Mode mode,
170                                                               size_t size
171 #if BUILDFLAG(IS_LINUX) || BUILDFLAG(IS_CHROMEOS)
172                                                               ,
173                                                               bool executable
174 #endif
175 ) {
      ......
195 
196   // We don't use shm_open() API in order to support the --disable-dev-shm-usage
197   // flag.
      // Get the /dev/shm directory (tmpfs).
198   FilePath directory;
199   if (!GetShmemTempDir(
200 #if BUILDFLAG(IS_LINUX) || BUILDFLAG(IS_CHROMEOS)
201           executable,
202 #else
203           false /* executable */,
204 #endif
205           &directory)) {
206     return {};
207   }
208 
      // Create a temporary file under /dev/shm, open it, and return the fd.
209   FilePath path;
210   ScopedFD fd = CreateAndOpenFdForTemporaryFileInDir(directory, &path);
211   File shm_file(fd.release());
212 
      ......
225 
226   // Deleting the file prevents anyone else from mapping it in (making it
227   // private), and prevents the need for cleanup (once the last fd is
228   // closed, it is truly freed).
229   ScopedPathUnlinker path_unlinker(&path);
230 
231   ScopedFD readonly_fd;
232   if (mode == Mode::kWritable) {
233     // Also open as readonly so that we can ConvertToReadOnly().
        // For kWritable mode, also open the file read-only into readonly_fd,
        // since PlatformSharedMemoryRegion needs it for ConvertToReadOnly().
234     readonly_fd.reset(HANDLE_EINTR(open(path.value().c_str(), O_RDONLY)));
235     if (!readonly_fd.is_valid()) {
236       DPLOG(ERROR) << "open(\"" << path.value() << "\", O_RDONLY) failed";
237       return {};
238     }
239   }
240  
      // Grow the file to the requested size.
241   if (!AllocateFileRegion(&shm_file, 0, size)) {
242     return {};
243   }
244 
     ......
264 
265   return PlatformSharedMemoryRegion(
266       {ScopedFD(shm_file.TakePlatformFile()), std::move(readonly_fd)}, mode,
267       size, UnguessableToken::Create());
268 #endif  // !BUILDFLAG(IS_NACL)
269 }
270 


PlatformSharedMemoryRegion::PlatformSharedMemoryRegion(
    ScopedFDPair handle,
    Mode mode,
    size_t size,
    const UnguessableToken& guid)
    : handle_(std::move(handle)), mode_(mode), size_(size), guid_(guid) {}

UnsafeSharedMemoryRegion::UnsafeSharedMemoryRegion(
    subtle::PlatformSharedMemoryRegion handle)
    : handle_(std::move(handle)) {
  if (handle_.IsValid()) {
    CHECK_EQ(handle_.GetMode(),
             subtle::PlatformSharedMemoryRegion::Mode::kUnsafe);
  }
}

The main logic of PlatformSharedMemoryRegion::Create is to create and open a file under /dev/shm. On POSIX systems /dev/shm is normally a tmpfs mount, i.e. an in-memory filesystem, so the contents of files created there live directly in memory. It then calls AllocateFileRegion to grow the file to the requested size, and finally hands the open file descriptor(s) to a PlatformSharedMemoryRegion object. PlatformSharedMemoryRegion can hold a read-only file descriptor and a writable one at the same time.
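The create-then-unlink pattern used above can be sketched as follows. /tmp stands in for /dev/shm so the sketch runs anywhere, and the helper name is hypothetical; the sequence of syscalls (create temp file, unlink it, grow with ftruncate) is the same:

```cpp
#include <cassert>
#include <cstdlib>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Creates a "nameless" file-backed region: once unlinked, nobody else can
// open it by path, and the storage is freed when the last fd closes.
int CreateAnonymousRegion(size_t size) {
  char path[] = "/tmp/ipcz_demo_XXXXXX";
  int fd = mkstemp(path);  // create + open a unique temp file
  if (fd < 0) return -1;
  unlink(path);            // remove the name: private, self-cleaning
  if (ftruncate(fd, static_cast<off_t>(size)) != 0) {  // set region size
    close(fd);
    return -1;
  }
  return fd;
}
```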

PlatformSharedMemoryRegion is the low-level object. SharedBuffer is the ipcz driver-layer object exposed to the ipcz layer above; let's look at its implementation.

SharedBuffer::SharedBuffer(base::subtle::PlatformSharedMemoryRegion region)
    : region_(std::move(region)) {}

Let's summarize the data structures involved in shared memory:
[Figure 4: shared-memory related data structures]

Next, let's see how process a maps the open file descriptor into its own memory.

DriverMemoryMapping DriverMemory::Map() {
......
  volatile void* address;
  IpczDriverHandle mapping_handle;
  IpczResult result = memory_.driver()->MapSharedMemory(
      memory_.handle(), 0, nullptr, &address, &mapping_handle);
......
  // TODO(https://crbug.com/1451717): Propagate the volatile qualifier on
  // `address`.
  return DriverMemoryMapping(*memory_.driver(), mapping_handle,
                             const_cast<void*>(address), size_);
}

Map() calls the ipcz driver's MapSharedMemory to mmap the open file descriptor into the process's address space, then creates a DriverMemoryMapping wrapping the DriverMemory and the mapped address. DriverMemory describes the shared memory object's file descriptor, size, and mode; it can be serialized and sent to process b, guiding b to perform its own mapping.

mojo/core/ipcz_driver/driver.cc

275 IpczResult IPCZ_API MapSharedMemory(IpczDriverHandle driver_memory,
276                                     uint32_t flags,
277                                     const void* options,
278                                     volatile void** address,
279                                     IpczDriverHandle* driver_mapping) {
280   SharedBuffer* buffer = SharedBuffer::FromHandle(driver_memory);
281   if (!buffer || !driver_mapping) {
282     return IPCZ_RESULT_INVALID_ARGUMENT;
283   }
284 
285   scoped_refptr<SharedBufferMapping> mapping =
286       SharedBufferMapping::Create(buffer->region());
287   if (!mapping) {
288     return IPCZ_RESULT_RESOURCE_EXHAUSTED;
289   }
290 
291   *address = mapping->memory();
292   *driver_mapping = SharedBufferMapping::ReleaseAsHandle(std::move(mapping));
293   return IPCZ_RESULT_OK;
294 }

Lines 285-286 create a SharedBufferMapping from the SharedBuffer; line 291 calls its memory() method to get the mapped address, which is then wrapped as the driver_mapping out parameter.
mojo/core/ipcz_driver/shared_buffer_mapping.cc

 78 // static
 79 scoped_refptr<SharedBufferMapping> SharedBufferMapping::Create(
 80     base::subtle::PlatformSharedMemoryRegion& region) {
 81   void* memory;
 82   auto mapping = MapPlatformRegion(region, 0, region.GetSize(), &memory);
 83   if (!mapping) {
 84     return nullptr;
 85   }
 86 
 87   return base::MakeRefCounted<SharedBufferMapping>(std::move(mapping), memory);
 88 }
 89 

The function uses MapPlatformRegion to map the file into the current process's address space. MapPlatformRegion returns the mapped address, and finally a SharedBufferMapping object is created and returned to the caller.

std::unique_ptr<base::SharedMemoryMapping> MapPlatformRegion(
    base::subtle::PlatformSharedMemoryRegion& region,
    size_t offset,
    size_t size,
    void** memory) {
  using Mode = base::subtle::PlatformSharedMemoryRegion::Mode;
  switch (region.GetMode()) {
    case Mode::kReadOnly:
      return MapRegion<base::ReadOnlySharedMemoryRegion>(region, offset, size,
                                                         memory);
    case Mode::kWritable:
      return MapRegion<base::WritableSharedMemoryRegion>(region, offset, size,
                                                         memory);
    case Mode::kUnsafe:
      return MapRegion<base::UnsafeSharedMemoryRegion>(region, offset, size,
                                                       memory);
  }
  return nullptr;
}

MapPlatformRegion creates a different MemoryRegion type depending on the PlatformSharedMemoryRegion's mode; in our case it is UnsafeSharedMemoryRegion.

template <typename RegionType>
std::unique_ptr<base::SharedMemoryMapping> MapRegion(
    base::subtle::PlatformSharedMemoryRegion& region,
    size_t offset,
    size_t size,
    void** memory) {
  auto r = RegionType::Deserialize(std::move(region));
  typename RegionType::MappingType m = r.MapAt(offset, size);
  region = RegionType::TakeHandleForSerialization(std::move(r));
  if (!m.IsValid()) {
    return nullptr;
  }
  *memory = const_cast<void*>(m.memory());
  return std::make_unique<typename RegionType::MappingType>(std::move(m));
}

Here RegionType is UnsafeSharedMemoryRegion, and its MapAt method performs the mapping.
base/memory/unsafe_shared_memory_region.cc

WritableSharedMemoryMapping UnsafeSharedMemoryRegion::MapAt(
    uint64_t offset,
    size_t size,
    SharedMemoryMapper* mapper) const {
  if (!IsValid())
    return {};

  auto result = handle_.MapAt(offset, size, mapper);
  if (!result.has_value())
    return {};

  return WritableSharedMemoryMapping(result.value(), size, handle_.GetGUID(),
                                     mapper);
}

This in turn calls PlatformSharedMemoryRegion::MapAt() to do the mapping.
base/memory/platform_shared_memory_region.cc

 42 absl::optional<span<uint8_t>> PlatformSharedMemoryRegion::MapAt(
 43     uint64_t offset,
 44     size_t size,
 45     SharedMemoryMapper* mapper) const {
      ......
 63 
 64   if (!mapper)
 65     mapper = SharedMemoryMapper::GetDefaultInstance();
 66 
      ......
 75   auto result = mapper->Map(GetPlatformHandle(), write_allowed, aligned_offset,
 76                             size + adjustment_for_alignment);
 77 
      ......
 89   return result;

MapAt calls mapper->Map to perform the mapping.
base/memory/platform_shared_memory_mapper_posix.cc

absl::optional<span<uint8_t>> PlatformSharedMemoryMapper::Map(
    subtle::PlatformSharedMemoryHandle handle,
    bool write_allowed,
    uint64_t offset,
    size_t size) {
  void* address =
      mmap(nullptr, size, PROT_READ | (write_allowed ? PROT_WRITE : 0),
           MAP_SHARED, handle.fd, checked_cast<off_t>(offset));

  if (address == MAP_FAILED) {
    DPLOG(ERROR) << "mmap " << handle.fd << " failed";
    return absl::nullopt;
  }

  return make_span(reinterpret_cast<uint8_t*>(address), size);
}

After all these layers, the final step is simple: a call to mmap() performs the mapping.

WritableSharedMemoryMapping::WritableSharedMemoryMapping(
    span<uint8_t> mapped_span,
    size_t size,
    const UnguessableToken& guid,
    SharedMemoryMapper* mapper)
    : SharedMemoryMapping(mapped_span, size, guid, mapper) {}

}  // namespace base

SharedMemoryMapping::SharedMemoryMapping(span<uint8_t> mapped_span,
                                         size_t size,
                                         const UnguessableToken& guid,
                                         SharedMemoryMapper* mapper)
    : mapped_span_(mapped_span), size_(size), guid_(guid), mapper_(mapper) {
  // Note: except on Windows, `mapped_span_.size() == size_`.
  SharedMemoryTracker::GetInstance()->IncrementMemoryUsage(*this);
}

Through these layered calls the file has been mapped into memory. Let's summarize the data structures around SharedBufferMapping.
base/memory/writable_shared_memory_region.h
[Figure 5: SharedBufferMapping related data structures]

At this point the mapping is complete. Now let's return to how the shared memory is managed.
Recall the AllocateMemory function:

DriverMemoryWithMapping NodeLinkMemory::AllocateMemory(
    const IpczDriver& driver) {
  DriverMemory memory(driver, kPrimaryBufferSize);
  if (!memory.is_valid()) {
    return {};
  }

  DriverMemoryMapping mapping = memory.Map();
  if (!mapping.is_valid()) {
    return {};
  }

  PrimaryBuffer& primary_buffer =
      *reinterpret_cast<PrimaryBuffer*>(mapping.bytes().data());

  // The first allocable BufferId is 1, because the primary buffer uses 0.
  primary_buffer.header.next_buffer_id.store(1, std::memory_order_relaxed);

  // The first allocable SublinkId is kMaxInitialPortals. This way it doesn't
  // matter whether the two ends of a NodeLink initiate their connection with a
  // different initial portal count: neither can request more than
  // kMaxInitialPortals, so neither will be assuming initial ownership of any
  // SublinkIds at or above this value.
  primary_buffer.header.next_sublink_id.store(kMaxInitialPortals,
                                              std::memory_order_relaxed);

  // Note: InitializeRegion() performs an atomic release, so atomic stores
  // before this section can be relaxed.
  primary_buffer.block_allocator_64().InitializeRegion();
  primary_buffer.block_allocator_256().InitializeRegion();
  primary_buffer.block_allocator_512().InitializeRegion();
  primary_buffer.block_allocator_1k().InitializeRegion();
  primary_buffer.block_allocator_2k().InitializeRegion();
  primary_buffer.block_allocator_4k().InitializeRegion();
  return {std::move(memory), std::move(mapping)};
}

We analyzed the mapping of the shared memory above, but how does a process actually use it? In ipcz the memory is reinterpreted as a PrimaryBuffer, whose main job is to describe the layout of the whole region. The mapped region has size kPrimaryBufferSize, which is 128 KB on Linux. Let's see how PrimaryBuffer lays out these 128 KB.
[Figure 6: PrimaryBuffer memory layout]

The layout described by PrimaryBuffer begins with a PrimaryBufferHeader, which records the next buffer id (next_buffer_id) and next_sublink_id, the id of the next remote router link. next_sublink_id starts from 0 and each new RemoteRouterLink consumes one id; kMaxInitialPortals is 12, so 12 (kMaxInitialPortals) RemoteRouterLinks are initialized when a NodeLink is established. After the PrimaryBufferHeader comes a reserved region, followed by 12 (kMaxInitialPortals) RouterLinkState entries that maintain the state of both ends of a RemoteRouterLink and are used for proxy bypass. After that come pools of contiguous fixed-size memory blocks: 1484 blocks of 64 B, 9 of 256 B, 8 of 512 B, 4 of 1024 B, 4 of 2048 B, and 4 of 4096 B. Subsequent allocations hand out whole blocks from the matching pool, which simplifies memory management.

Each block has the following structure:
[Figure 7: block structure]
Blocks in a pool that have not been handed out are free blocks, and every block has an index. The next field in a free block's BlockHeader points to the next free block; after initialization, the last free block in a pool points back to the first free block. That is exactly what primary_buffer.block_allocator_4k().InitializeRegion() does: it links the last free block's next to the first one.

Now that block management is clear, let's look at how process a gets the shared memory mapped into process b.

AllocateMemory ultimately returns a DriverMemoryWithMapping instance.

struct DriverMemoryWithMapping {
   ......

  DriverMemory memory;
  DriverMemoryMapping mapping;
};

DriverMemoryWithMapping holds a DriverMemory (file descriptor, size, open mode) and a DriverMemoryMapping (mapped address and size). As noted earlier, process a is the broker node, so the NodeConnector created is NodeConnectorForBrokerToNonBroker.

NodeConnectorForBrokerToNonBroker(Ref<Node> node,
                                    Ref<DriverTransport> transport,
                                    DriverMemoryWithMapping memory,
                                    IpczConnectNodeFlags flags,
                                    std::vector<Ref<Portal>> waiting_portals,
                                    ConnectCallback callback)
      : NodeConnector(std::move(node),
                      std::move(transport),
                      flags,
                      std::move(waiting_portals),
                      std::move(callback)),
        link_memory_allocation_(std::move(memory)) {
    ABSL_HARDENING_ASSERT(link_memory_allocation_.mapping.is_valid());
  }

NodeConnectorForBrokerToNonBroker stores the DriverMemoryWithMapping in its link_memory_allocation_ member. The code that connects the nodes and establishes the NodeLink is as follows:

third_party/ipcz/src/ipcz/node_connector.cc

528 // static
529 IpczResult NodeConnector::ConnectNode(
530     Ref<Node> node,
531     Ref<DriverTransport> transport,
532     IpczConnectNodeFlags flags,
533     const std::vector<Ref<Portal>>& initial_portals,
534     ConnectCallback callback) {
      ......
554   if (!share_broker && !connector->ActivateTransport()) {
555     // Note that when referring another node to our own broker, we don't
556     // activate the transport, since the transport will be passed to the broker.
557     // See NodeConnectorForReferrer.
558     return IPCZ_RESULT_UNKNOWN;
559   }
560 
561   if (!connector->Connect()) {
562     return IPCZ_RESULT_UNKNOWN;
563   }
564 
565   return IPCZ_RESULT_OK;
566 }
567 

We have already seen how the transport is activated. With a transmission channel in place, process a can ask process b to establish a NodeLink. NodeConnectorForBrokerToNonBroker's Connect function is implemented as follows:

 49   // NodeConnector:
 50   bool Connect() override {
        ......
 57     msg::ConnectFromBrokerToNonBroker connect;
 58     connect.params().broker_name = broker_name_;
 59     connect.params().receiver_name = new_remote_node_name_;
 60     connect.params().protocol_version = msg::kProtocolVersion;
 61     connect.params().num_initial_portals =
 62         checked_cast<uint32_t>(num_portals());
 63     connect.params().buffer = connect.AppendDriverObject(
 64         link_memory_allocation_.memory.TakeDriverObject());
 65     connect.params().padding = 0;
 66     return IPCZ_RESULT_OK == transport_->Transmit(connect);
 67   }

Line 57 creates a msg::ConnectFromBrokerToNonBroker message object, then sets its parameters:

  • broker_name: the broker's name
  • receiver_name: the name broker a assigns to process b's node
  • protocol_version: the protocol version
  • num_initial_portals: the number of initial portals
  • buffer: the shared memory object (a DriverObject)

Line 66 calls transport_->Transmit to send the msg::ConnectFromBrokerToNonBroker message. Let's focus on how the shared memory object is placed into the message.
third_party/ipcz/src/ipcz/message.cc

uint32_t Message::AppendDriverObject(DriverObject object) {
  if (!object.is_valid()) {
    return internal::kInvalidDriverObjectIndex;
  }

  const uint32_t index = checked_cast<uint32_t>(driver_objects_.size());
  driver_objects_.push_back(std::move(object));
  return index;
}

msg::ConnectFromBrokerToNonBroker inherits from Message, whose driver_objects_ member holds the DriverObjects to be transmitted. Here the DriverObject behind DriverMemory (which points at the SharedBuffer holding the shared memory's file descriptor) is pushed into driver_objects_, and its index within driver_objects_ is recorded in connect.params().buffer (later used to locate the corresponding DriverObject). Once the peer receives the connect request it can therefore retrieve the shared memory's file descriptor and map it.
Let's look at this process in detail.

IpczResult DriverTransport::Transmit(Message& message) {
  ABSL_ASSERT(message.CanTransmitOn(*this));
  if (!message.Serialize(*this)) {
    // If serialization fails despite the object appearing to be serializable,
    // we have to assume the transport is in a dysfunctional state and will be
    // torn down by the driver soon. Discard the transmission.
    return IPCZ_RESULT_FAILED_PRECONDITION;
  }

  const absl::Span<const uint8_t> data = message.data_view();
  const absl::Span<const IpczDriverHandle> handles =
      message.transmissible_driver_handles();
  return transport_.driver()->Transmit(transport_.handle(), data.data(),
                                       data.size(), handles.data(),
                                       handles.size(), IPCZ_NO_FLAGS, nullptr);
}

Transmit first serializes the message, then calls the ipcz driver's Transmit to send the data. Let's start with message serialization.

third_party/ipcz/src/ipcz/message.cc

245 bool Message::Serialize(const DriverTransport& transport) {
246   ABSL_ASSERT(CanTransmitOn(transport));
247   if (driver_objects_.empty()) {
248     return true;
249   }
250 
251   const uint32_t array_offset =
252       AllocateArray<internal::DriverObjectData>(driver_objects_.size());
253   header().driver_object_data_array = array_offset;
254 
255   // NOTE: In Chromium, a vast majority of IPC messages have 0, 1, or 2 OS
256   // handles attached. Since these objects are small, we inline some storage on
257   // the stack to avoid some heap allocation in the most common cases.
258   absl::InlinedVector<IpczDriverHandle, 2> transmissible_handles;
259   bool ok = true;
260   for (size_t i = 0; i < driver_objects().size(); ++i) {
261     internal::DriverObjectData data = {};
262     ok &= SerializeDriverObject(std::move(driver_objects()[i]), transport,
263                                 *this, data, transmissible_handles);
264     GetArrayView<internal::DriverObjectData>(array_offset)[i] = data;
265   }
266 
267   if (ok) {
268     transmissible_driver_handles_ = std::move(transmissible_handles);
269     return true;
270   }
271   return false;
272 }

The msg::ConnectFromBrokerToNonBroker class is generated by macro expansion; its expanded definition is as follows (expanded with GPT's help — see the article chromium通信系统-ipcz系统(三)-ipcz-消息相关的宏展开 for details):

class ConnectFromBrokerToNonBroker : public MessageWithParams<ConnectFromBrokerToNonBroker_Params> {
 public:
  using ParamsType = ConnectFromBrokerToNonBroker_Params;
  static_assert(sizeof(ParamsType) % 8 == 0, "Invalid size");
  static constexpr uint8_t kId = 0;
  static constexpr uint32_t kVersion = 0;
  ConnectFromBrokerToNonBroker();
  ~ConnectFromBrokerToNonBroker();
  bool Deserialize(const DriverTransport::RawMessage& message, const DriverTransport& transport);
  bool DeserializeRelayed(absl::Span<const uint8_t> data, absl::Span<DriverObject> objects);

  static constexpr internal::ParamMetadata kMetadata[] = {
    {offsetof(ParamsType, broker_name), sizeof(ParamsType::broker_name), 0, internal::ParamType::kData},
    {offsetof(ParamsType, receiver_name), sizeof(ParamsType::receiver_name), 0, internal::ParamType::kData},
    {offsetof(ParamsType, protocol_version), sizeof(ParamsType::protocol_version), 0, internal::ParamType::kData},
    {offsetof(ParamsType, num_initial_portals), sizeof(ParamsType::num_initial_portals), 0, internal::ParamType::kData},
    {offsetof(ParamsType, buffer), sizeof(ParamsType::buffer), 0, internal::ParamType::kDriverObject},
    {offsetof(ParamsType, padding), sizeof(ParamsType::padding), 0, internal::ParamType::kData},
  };
};

msg::ConnectFromBrokerToNonBroker inherits from MessageWithParams, which inherits from Message. Serializing it mainly means turning the driver_objects_ into byte data that can cross the process boundary; a driver object may contain file descriptors, which must travel separately (out of band, not as byte data). The peer process must be able to deserialize the bytes back into DriverObjects. The serialized memory layout of msg::ConnectFromBrokerToNonBroker looks like this:
[Figure 8: serialized layout of msg::ConnectFromBrokerToNonBroker]

The first part is the Header, which is mostly fixed data; params holds the message parameters, which for msg::ConnectFromBrokerToNonBroker are broker_name, receiver_name, protocol_version, num_initial_portals, buffer, and padding:

 57     msg::ConnectFromBrokerToNonBroker connect;
 58     connect.params().broker_name = broker_name_;
 59     connect.params().receiver_name = new_remote_node_name_;
 60     connect.params().protocol_version = msg::kProtocolVersion;
 61     connect.params().num_initial_portals =
 62         checked_cast<uint32_t>(num_portals());
 63     connect.params().buffer = connect.AppendDriverObject(
 64         link_memory_allocation_.memory.TakeDriverObject());
 65     connect.params().padding = 0;

These two parts exist in memory before serialization. Serialization produces two outputs: byte data describing each driver object's serialized form, and the file descriptors to transmit, which are collected in transmissible_handles and sent out of band. Following params comes the serialized data of driver_objects_: each driver object gets a DriverObjectData entry recording the index (first_driver_handle) and count (num_driver_handles) of its file descriptors within transmissible_handles, as well as the offset of the object's serialized byte data.

Let's take the driver object held by DriverMemory (whose handle points to a SharedBuffer) as an example of how serialization works.

 33 // DriverObjectData.
 34 bool SerializeDriverObject(
 35     DriverObject object,
 36     const DriverTransport& transport,
 37     Message& message,
 38     internal::DriverObjectData& data,
 39     absl::InlinedVector<IpczDriverHandle, 2>& transmissible_handles) {
 40   if (!object.is_valid()) {
 41     // This is not a valid driver handle and it cannot be serialized.
 42     data.num_driver_handles = 0;
 43     return false;
 44   }
 45   
      // Ask how much memory the driver object needs to serialize, and how
      // many file descriptors it will produce.
 46   uint32_t driver_data_array = 0;
 47   DriverObject::SerializedDimensions dimensions =
 48       object.GetSerializedDimensions(transport);
 49   if (dimensions.num_bytes > 0) {
        // Allocate the required memory in the message.
 50     driver_data_array = message.AllocateArray<uint8_t>(dimensions.num_bytes);
 51   }
 52   
      // Fill in DriverObjectData (position of the first fd, fd count, and
      // the offset of the serialized byte data).
 53   const uint32_t first_handle =
 54       static_cast<uint32_t>(transmissible_handles.size());
 55   absl::Span<uint8_t> driver_data =
 56       message.GetArrayView<uint8_t>(driver_data_array);
 57   data.driver_data_array = driver_data_array;
 58   data.num_driver_handles = dimensions.num_driver_handles;
 59   data.first_driver_handle = first_handle;
 60 
 61   transmissible_handles.resize(transmissible_handles.size() +
 62                                dimensions.num_driver_handles);
 63 
 64   auto handles_view = absl::MakeSpan(transmissible_handles);
      // Serialize the object.
 65   if (!object.Serialize(
 66           transport, driver_data,
 67           handles_view.subspan(first_handle, dimensions.num_driver_handles))) {
 68     return false;
 69   }
 70 
 71   return true;
 72 }

Lines 46-52 call object.GetSerializedDimensions(transport) to learn how much memory serialization needs and how many file descriptors it will produce, then allocate that memory; the counts also serve to validate the parameters.
Lines 53-60 fill in the DriverObjectData structure (the position of the object's file descriptors within transmissible_handles, their count, and the offset of the serialized byte data).
Line 61 grows transmissible_handles.
Line 65 serializes the driver object.

third_party/ipcz/src/ipcz/driver_object.cc

bool DriverObject::Serialize(const DriverTransport& transport,
                             absl::Span<uint8_t> data,
                             absl::Span<IpczDriverHandle> handles) {
  size_t num_bytes = data.size();
  size_t num_handles = handles.size();
  IpczResult result = driver_->Serialize(
      handle_, transport.driver_object().handle(), IPCZ_NO_FLAGS, nullptr,
      data.data(), &num_bytes, handles.data(), &num_handles);
  if (result == IPCZ_RESULT_OK) {
    release();
    return true;
  }
  return false;
}

Serialize delegates to driver_->Serialize() to perform the serialization, with data and handles as out parameters.
mojo/core/ipcz_driver/driver.cc

IpczResult IPCZ_API Serialize(IpczDriverHandle handle,
                              IpczDriverHandle transport_handle,
                              uint32_t flags,
                              const void* options,
                              volatile void* data,
                              size_t* num_bytes,
                              IpczDriverHandle* handles,
                              size_t* num_handles) {
  ObjectBase* object = ObjectBase::FromHandle(handle);
  Transport* transport = Transport::FromHandle(transport_handle);
  if (!object || !object->IsSerializable()) {
    return IPCZ_RESULT_INVALID_ARGUMENT;
  }

  if (!transport) {
    return IPCZ_RESULT_ABORTED;
  }

  // TODO(https://crbug.com/1451717): Propagate the volatile qualifier on
  // `data`.
  const IpczResult result = transport->SerializeObject(
      *object, const_cast<void*>(data), num_bytes, handles, num_handles);
  if (result != IPCZ_RESULT_OK) {
    return result;
  }

  // On success we consume the object reference owned by the input handle.
  std::ignore = ObjectBase::TakeFromHandle(handle);
  return IPCZ_RESULT_OK;
}

The driver calls transport->SerializeObject() to serialize the object.
mojo/core/ipcz_driver/transport.cc

378 IpczResult Transport::SerializeObject(ObjectBase& object,
379                                       void* data,
380                                       size_t* num_bytes,
381                                       IpczDriverHandle* handles,
382                                       size_t* num_handles) {
383   size_t object_num_bytes;
384   size_t object_num_handles;
      // Query how many bytes and handles the serialized object will need
385   if (!object.GetSerializedDimensions(*this, object_num_bytes,
386                                       object_num_handles)) {
387     return IPCZ_RESULT_INVALID_ARGUMENT;
388   }
389 
      .....
417   auto& header = *static_cast<ObjectHeader*>(data);
418   header.size = sizeof(header);
419   header.type = object.type();
      ......
440   auto object_data = base::make_span(reinterpret_cast<uint8_t*>(&header + 1),
441                                      object_num_bytes);
442 #endif
443 
444   // A small amount of stack storage is reserved to avoid heap allocation in the
445   // most common cases.
446   base::StackVector<PlatformHandle, 2> platform_handles;
447   platform_handles->resize(object_num_handles);
448   if (!object.Serialize(*this, object_data,
449                         base::make_span(platform_handles.container()))) {
450     return IPCZ_RESULT_INVALID_ARGUMENT;
451   }
452 
453   bool ok = true;
454   for (size_t i = 0; i < object_num_handles; ++i) {
      ......
459     handles[i] = TransmissiblePlatformHandle::ReleaseAsHandle(
460         base::MakeRefCounted<TransmissiblePlatformHandle>(
461             std::move(platform_handles[i])));
462 #endif
463   }
464   return ok ? IPCZ_RESULT_OK : IPCZ_RESULT_INVALID_ARGUMENT;
465 }

Lines 385-386 ask the object how many bytes and handles its serialized form will occupy; the elided code then verifies that the caller-supplied buffers are large enough. Line 448 invokes the object's own serialization; the handles obtained here are PlatformHandles, i.e. file descriptors on Linux.
Lines 454-461 convert each PlatformHandle into an IpczDriverHandle and store it in the output parameter handles. The serialized memory data begins with an ObjectHeader, whose type member identifies the kind of driver object. The ipcz driver supports the following types:

  • kTransport: a transport endpoint
  • kSharedBuffer: shared memory (a file descriptor)
  • kSharedBufferMapping: a mapped shared-memory region
  • kTransmissiblePlatformHandle: a handle that can be transmitted as-is
  • kWrappedPlatformHandle: a wrapped PlatformHandle
  • kDataPipe: a data pipe
  • kMojoTrap: a trap object
  • kInvitation: an invitation object
In our case, the type is kSharedBuffer.
Let's go straight to the SharedBuffer serialization.
mojo/core/ipcz_driver/shared_buffer.cc

188 bool SharedBuffer::Serialize(Transport& transmitter,
189                              base::span<uint8_t> data,
190                              base::span<PlatformHandle> handles) {
      ......
196   BufferHeader& header = *reinterpret_cast<BufferHeader*>(data.data());
197   header.size = sizeof(header);
198   header.buffer_size = static_cast<uint32_t>(region_.GetSize());
199   header.padding = 0;
200   switch (region_.GetMode()) {
201     case base::subtle::PlatformSharedMemoryRegion::Mode::kReadOnly:
202       header.mode = BufferMode::kReadOnly;
203       break;
204     case base::subtle::PlatformSharedMemoryRegion::Mode::kWritable:
205       header.mode = BufferMode::kWritable;
206       break;
207     case base::subtle::PlatformSharedMemoryRegion::Mode::kUnsafe:
208       header.mode = BufferMode::kUnsafe;
209       break;
210   }
211   base::UnguessableToken guid = region_.GetGUID();
212   header.guid_low = guid.GetLowForSerialization();
213   header.guid_high = guid.GetHighForSerialization();
214 
215   auto handle = region_.PassPlatformHandle();
      ......
221   if (header.mode == BufferMode::kWritable) {
222     DCHECK_EQ(2u, handles.size());
223     handles[0] = PlatformHandle(std::move(handle.fd));
224     handles[1] = PlatformHandle(std::move(handle.readonly_fd));
225   } else {
226     DCHECK_EQ(1u, handles.size());
227     handles[0] = PlatformHandle(std::move(handle.fd));
228   }
229 #endif
230 
231   return true;
232 }
233 

SharedBuffer::Serialize writes the shared memory region's size, access mode, and GUID into the data buffer, and places the region's file descriptor(s) into handles. Once process B receives this information it can mmap the descriptor into its own address space, at which point it shares the memory with process A.

Now let's get back to the connection request.
mojo/core/ipcz_driver/transport.cc

338 bool Transport::Transmit(base::span<const uint8_t> data,
339                          base::span<const IpczDriverHandle> handles) {
     ......
     // Convert the driver handles back to file descriptors
347   std::vector<PlatformHandle> platform_handles;
348   platform_handles.reserve(handles.size());
349   for (IpczDriverHandle handle : handles) {
350     auto transmissible_handle =
351         TransmissiblePlatformHandle::TakeFromHandle(handle);
352     DCHECK(transmissible_handle);
353     platform_handles.push_back(transmissible_handle->TakeHandle());
354   }
355 
356   scoped_refptr<Channel> channel;
357   {
358     base::AutoLock lock(lock_);
      ......
370     channel = channel_;
371   }
372 
373   channel->Write(
374       Channel::Message::CreateIpczMessage(data, std::move(platform_handles)));
375   return true;
376 }
377 

Lines 373-374 wrap the connection message as a mojo Channel::Message and call channel->Write() to send it out.

mojo/core/channel_linux.cc

621 void ChannelLinux::Write(MessagePtr message) {
622   if (!shared_mem_writer_ || message->has_handles() || reject_writes_) {
623     // Let the ChannelPosix deal with this.
624     return ChannelPosix::Write(std::move(message));
625   }
     ......
649 }

mojo/core/channel_posix.cc
148 void ChannelPosix::Write(MessagePtr message) {
      ......
155   bool write_error = false;
156   {
157     base::AutoLock lock(write_lock_);
158     if (reject_writes_)
159       return;
160     if (outgoing_messages_.empty()) {
161       if (!WriteNoLock(MessageView(std::move(message), 0)))
162         reject_writes_ = write_error = true;
163     } else {
164       outgoing_messages_.emplace_back(std::move(message), 0);
165     }
166   }
      ......
174 }

If no earlier messages are queued, Write calls WriteNoLock to write the data out immediately; otherwise the message joins outgoing_messages_ to preserve ordering.
mojo/core/channel_posix.cc

334 bool ChannelPosix::WriteNoLock(MessageView message_view) {
335   size_t bytes_written = 0;
336   std::vector<PlatformHandleInTransit> handles = message_view.TakeHandles(); // the handles (file descriptors) carried by the message
337   size_t num_handles = handles.size();
338   size_t handles_written = message_view.num_handles_sent(); // number of file descriptors already sent
339   do {
340     message_view.advance_data_offset(bytes_written); // skip past data already sent
341 
342     ssize_t result;
343     if (handles_written < num_handles) {
344       iovec iov = {const_cast<void*>(message_view.data()),
345                    message_view.data_num_bytes()};
346       size_t num_handles_to_send =
347           std::min(num_handles - handles_written, kMaxSendmsgHandles);
348       std::vector<base::ScopedFD> fds(num_handles_to_send);
349       for (size_t i = 0; i < num_handles_to_send; ++i)
             // Collect the file descriptors to send
350         fds[i] = handles[i + handles_written].TakeHandle().TakeFD(); 
351       // TODO: Handle lots of handles.
          // Send the file descriptors along with the data bytes
352       result = SendmsgWithHandles(socket_.get(), &iov, 1, fds);
353       if (result >= 0) { 
            // Send succeeded
          ......
375         handles_written += num_handles_to_send;
376         DCHECK_LE(handles_written, num_handles);
377         message_view.set_num_handles_sent(handles_written);
378       } else { 
            // Send failed
379         // Message transmission failed, so pull the FDs back into |handles|
380         // so they can be held by the Message again.
381         for (size_t i = 0; i < fds.size(); ++i) {
382           handles[i + handles_written] =
383               PlatformHandleInTransit(PlatformHandle(std::move(fds[i])));
384         }
385       }
386     } else {
          // No handles left; send plain data
387       result = SocketWrite(socket_.get(), message_view.data(),
388                            message_view.data_num_bytes());
389     }
390 
391     if (result < 0) {
          // Write failed
392       if (errno != EAGAIN &&
393           errno != EWOULDBLOCK
          ......
410       ) {
            // Anything other than EAGAIN/EWOULDBLOCK is fatal: return false and disconnect
411         return false;
412       }
          // A would-block error: the message is partially sent, so re-queue it in outgoing_messages_, resume once the fd is writable again, and return true
413       message_view.SetHandles(std::move(handles));
414       outgoing_messages_.emplace_front(std::move(message_view));
415       WaitForWriteOnIOThreadNoLock();
416       return true;
417     }
418 
419     bytes_written = static_cast<size_t>(result);
420   } while (handles_written < num_handles ||
421            bytes_written < message_view.data_num_bytes());
422  
      // This message is fully written; flush any other queued messages
423   return FlushOutgoingMessagesNoLock();
424 }

The logic of WriteNoLock is fairly simple: the file descriptors are transmitted with sendmsg, riding alongside the message bytes, and any remaining data is written out with send.

This completes our analysis of process A's request to establish a NodeLink: the shared-memory file descriptor and the serialized data have both been sent out. Next we look at how process B handles the received data.

From the channel analysis we know that while process B waits for the NodeLink to be established, an incoming message triggers Transport::OnChannelMessage(); let's analyze it now.
mojo/core/ipcz_driver/transport.cc

void Transport::OnChannelMessage(const void* payload,
                                 size_t payload_size,
                                 std::vector<PlatformHandle> handles) {
  std::vector<IpczDriverHandle> driver_handles(handles.size());
  for (size_t i = 0; i < handles.size(); ++i) {
    driver_handles[i] = TransmissiblePlatformHandle::ReleaseAsHandle(
        base::MakeRefCounted<TransmissiblePlatformHandle>(
            std::move(handles[i])));
  }

  const IpczResult result = activity_handler_(
      ipcz_transport_, static_cast<const uint8_t*>(payload), payload_size,
      driver_handles.data(), driver_handles.size(), IPCZ_NO_FLAGS, nullptr);
  if (result != IPCZ_RESULT_OK && result != IPCZ_RESULT_UNIMPLEMENTED) {
    OnChannelError(Channel::Error::kReceivedMalformedData);
  }
}

Here the activity_handler_ callback notifies the upper layer; its arguments are the received data and its size, followed by the received file descriptors and their count. activity_handler_ is DriverTransport's NotifyTransport function (crossing from the ipcz driver layer into the ipcz layer).

IpczResult IPCZ_API NotifyTransport(IpczHandle listener,
                                    const void* data,
                                    size_t num_bytes,
                                    const IpczDriverHandle* driver_handles,
                                    size_t num_driver_handles,
                                    IpczTransportActivityFlags flags,
                                    const void* options) {
  DriverTransport* t = DriverTransport::FromHandle(listener);
.......

  if (!t->Notify({absl::MakeSpan(static_cast<const uint8_t*>(data), num_bytes),
                  absl::MakeSpan(driver_handles, num_driver_handles)})) {
    return IPCZ_RESULT_INVALID_ARGUMENT;
  }

  return IPCZ_RESULT_OK;
}

NotifyTransport calls DriverTransport's Notify method, which in turn calls listener->OnTransportMessage. During connection establishment the listener is a NodeConnector object; for process B, a non-broker node, the concrete connector is NodeConnectorForNonBrokerToBroker. NodeConnectorForNonBrokerToBroker derives from NodeConnector, which derives from NodeMessageListener; NodeMessageListener is generated by macro expansion, as covered in chromium通信系统-ipcz系统(三)-ipcz-消息相关的宏展开.

bool NodeMessageListener::OnTransportMessage(
    const DriverTransport::RawMessage& raw_message,
    const DriverTransport& transport) {
  if (raw_message.data.size() < sizeof(internal::MessageHeaderV0)) {
    return false;
  }
  const auto& header =
      *reinterpret_cast<const internal::MessageHeaderV0*>(
          raw_message.data.data());
  switch (header.message_id) {
    case ConnectFromBrokerToNonBroker::kId: {
      ConnectFromBrokerToNonBroker message(Message::kIncoming);
      if (!message.Deserialize(raw_message, transport)) {
        return false;
      }
      return OnMessage(message);
    }
  ......
  }
}

bool NodeMessageListener::OnMessage(Message& message) {
  return DispatchMessage(message);
}

bool NodeMessageListener::DispatchMessage(Message& message) {
    switch (message.header().message_id) {
      ......
      case msg::ConnectFromBrokerToNonBroker::kId:
        return OnConnectFromBrokerToNonBroker(static_cast<msg::ConnectFromBrokerToNonBroker&>(message));
   ......
  }

The listener first calls msg::ConnectFromBrokerToNonBroker's deserialization routine, turning the raw data and file descriptors into a msg::ConnectFromBrokerToNonBroker object, and then calls OnMessage. Deserialization is simply the reverse of serialization, so we will not walk through it here. Dispatch eventually reaches NodeConnectorForNonBrokerToBroker::OnConnectFromBrokerToNonBroker(); let's look at its implementation.

third_party/ipcz/src/ipcz/node_connector.cc

116   // NodeMessageListener overrides:
117   bool OnConnectFromBrokerToNonBroker(
118       msg::ConnectFromBrokerToNonBroker& connect) override {
        ......
        // Map the shared memory into this process via mmap
123     DriverMemoryMapping mapping =
124         DriverMemory(connect.TakeDriverObject(connect.params().buffer)).Map();
125     if (!mapping.is_valid()) {
126       return false;
127     }
128 
        // Create the NodeLink
129     auto new_link = NodeLink::CreateActive(
130         node_, LinkSide::kB, connect.params().receiver_name,
131         connect.params().broker_name, Node::Type::kBroker,
132         connect.params().protocol_version, transport_,
133         NodeLinkMemory::Create(node_, std::move(mapping)));
134     if ((flags_ & IPCZ_CONNECT_NODE_TO_ALLOCATION_DELEGATE) != 0) {
135       node_->SetAllocationDelegate(new_link);
136     }
137 
138     AcceptConnection({.link = new_link, .broker = new_link},
139                      connect.params().num_initial_portals);
140     return true;
141   }

We have already analyzed DriverMemory: mapping the file into memory completes the shared-memory setup.
Lines 129-133 create the NodeLink; with that, the connection is essentially established.

In the next article we will analyze link establishment in detail, along with proxying and proxy bypass.

Summary:

All in all it is rather convoluted, with the connection and the shared memory wrapped layer upon layer...
