Learning MPI Parallel Programming — Send & Recv, the Message Send and Receive Functions

The message send function:

The prototype is int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

Parameter meanings: buf is the address of the message data, count is the number of elements, datatype is the datatype of the message elements, dest is the rank of the destination process, tag is the message tag, and comm is the communicator.

This function sends count elements of type datatype, starting at buf, to the process whose rank is dest; the message carries the tag tag, and the communicator is usually MPI_COMM_WORLD.
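
As a minimal sketch of such a call (the buffer contents, the count of 4, the tag 99, and the destination rank 1 are illustrative assumptions, not taken from the full example further below), sending a small array of doubles could look like this:

// Minimal sketch, assumed to run inside an MPI program after MPI_Init
// and MPI_Comm_rank (as in the full example below); the data values,
// tag 99, and destination rank 1 are hypothetical.
double data[4] = {1.0, 2.0, 3.0, 4.0};
if (world_rank == 0) {
  // buf = data, count = 4, datatype = MPI_DOUBLE, dest = 1, tag = 99
  MPI_Send(data, 4, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
}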


The message receive function:

The prototype is int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

Parameter meanings: buf is the address of the buffer that stores the incoming message, count is the maximum number of elements to receive, datatype is the datatype of the message elements, source is the rank of the sending process, tag is the message tag, comm is the communicator, and status holds the receive status.

The tag must match the tag used by the corresponding send, comm is the communicator containing both the sending and receiving processes, and status returns information about the completed receive (such as the actual source, tag, and element count).
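
A minimal sketch of the matching receive for the hypothetical send sketched above (4 doubles from rank 0 with tag 99); it also shows how the status object can be queried with MPI_Get_count and its MPI_SOURCE and MPI_TAG fields instead of being ignored:

// Matching receive for the hypothetical send sketched above.
double recv_buf[4];
MPI_Status status;
if (world_rank == 1) {
  MPI_Recv(recv_buf, 4, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
  int received;
  MPI_Get_count(&status, MPI_DOUBLE, &received);  // actual number of elements received
  printf("Got %d doubles from rank %d with tag %d\n",
         received, status.MPI_SOURCE, status.MPI_TAG);
}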


Code example:

// Author: Wes Kendall
// Copyright 2011 www.mpitutorial.com
// This code is provided freely with the tutorials on mpitutorial.com. Feel
// free to modify it for your own use. Any distribution of the code must
// either provide a link to www.mpitutorial.com or keep this header intact.
//
// MPI_Send, MPI_Recv example. Communicates the number -1 from process 0
// to process 1.
//
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv) {
  // Initialize the MPI environment
  MPI_Init(NULL, NULL);
  // Find out rank, size
  int world_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
  int world_size;
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // We are assuming at least 2 processes for this task
  if (world_size < 2) {
    fprintf(stderr, "World size must be greater than 1 for %s\n", argv[0]);
    MPI_Abort(MPI_COMM_WORLD, 1);
  }

  int number;
  if (world_rank == 0) {
    // If we are rank 0, set the number to -1 and send it to process 1
    number = -1;
    MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (world_rank == 1) {
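    // If we are rank 1, receive the number from process 0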
    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Process 1 received number %d from process 0\n", number);
  }
  MPI_Finalize();
  return 0;
}

In this program, the process with rank 0 sends the value -1 to the process with rank 1:

number = -1;
MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

Reading the arguments from left to right: the content being sent is number, so its address is passed; the count is 1; the datatype is MPI_INT; the destination rank is 1; the tag is 0; and the communicator is MPI_COMM_WORLD.

The receive call is read in the same way.
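
To illustrate that the two roles simply swap, here is a hedged sketch (an extension of the example above, not part of the original tutorial code) in which rank 1 sends the value back and rank 0 receives it:

// Hypothetical extension of the example above: after receiving the value,
// rank 1 echoes it back to rank 0.
if (world_rank == 1) {
  MPI_Send(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);                     // dest = 0
} else if (world_rank == 0) {
  MPI_Recv(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);  // source = 1
  printf("Process 0 received number %d back from process 1\n", number);
}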
