1 year ago
#374796
eisenbahnfan
Use MPI_Type_create_subarray to move column
I have a problem with sending a single column of a 2D array with MPI in C++. I found MPI_Type_create_subarray, and it seems to work fine, except for the case where I want to receive the values of the last column. If I use column index 3 in a 4x4 array, for example, I always get a SIGSEGV. Can anybody please help me?
I have created a small code example (and left the error checking etc. out here):
#include <iostream>
#include <mpi.h>
int main(int argc, char** argv){
    double array[4][4]{
        {1,1,1,1},
        {1,1,1,1},
        {1,1,1,1},
        {1,1,1,1}
    };
    MPI_Init(&argc, &argv);

    int numprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Request request[2];
    MPI_Status status[2];
    int request_count = 0;

    if( rank == 0 ){
        // Write column 3 ...
        for( int i = 0; i < 4; ++i)
            array[i][3] = i;

        // Send subarray, i.e. the 3rd column
        MPI_Datatype column_send;
        {
            int array_sizes[]{4,4};
            int array_subsizes[]{4,1};
            int starting_points[]{0,3};
            MPI_Type_create_subarray(2, array_sizes, array_subsizes, starting_points, MPI_ORDER_C, MPI_DOUBLE, &column_send);
            MPI_Type_commit(&column_send);
        }
        MPI_Isend(
            array[0],
            4,
            column_send,
            1,                  // destination rank
            0,                  // tag
            MPI_COMM_WORLD,
            request + request_count++
        );
    }
    else if( rank == 1 ){
        // Just overwrite the values here for testing ...
        for( int i = 0; i < 4; ++i){
            for( int j = 0; j < 4; ++j){
                array[i][j] = -1;
            }
        }

        // Receive the 3rd column
        MPI_Datatype column_send;
        {
            int array_sizes[]{4,4};
            int array_subsizes[]{4,1};
            // NOTE: IF I USE {0,0}, {0,1} or {0,2} HERE, IT WORKS!
            // But with {0,3}, I get a SIGSEGV.
            int starting_points[]{0,3};
            MPI_Type_create_subarray(2, array_sizes, array_subsizes, starting_points, MPI_ORDER_C, MPI_DOUBLE, &column_send);
            MPI_Type_commit(&column_send);
        }
        MPI_Irecv(
            array[0],
            4,
            column_send,
            0,                  // source rank
            0,                  // tag
            MPI_COMM_WORLD,
            request + request_count++
        );
    }

    MPI_Waitall(request_count, request, status);

    // Print array ...
    if( rank == 1 ){
        for( int i = 0; i < 4; ++i ){
            for( int j = 0; j < 4; ++j ){
                std::cout << array[i][j] << " ";
            }
            std::cout << std::endl;
        }
    }
    MPI_Finalize();
}
The output for {0,3} is the following, while in the other cases only the array is printed:
[xxx@yyy ~]$ mpirun --mca orte_base_help_aggregate 0 -np 2 ./test
-1 -1 -1 0
-1 -1 -1 1
-1 -1 -1 2
-1 -1 -1 3
[yyy:2436997] *** Process received signal ***
[yyy:2436997] Signal: Segmentation fault (11)
[yyy:2436997] Signal code: Address not mapped (1)
[yyy:2436997] Failing at address: 0x7fc36e0fb493
...
Thank you very much in advance!
EDIT: The SIGSEGV actually happens at the MPI_Finalize() and not at sending or receiving.
c++
segmentation-fault
mpi