
COMP5426 Distributed
Distributed Memory Programming with MPI


MPI defines a standard library for message-passing that can be used to develop portable message-passing programs using either C or Fortran.
The standard defines both the syntax and the semantics of a core set of library routines.
Implementations of MPI are available on most commercial parallel computers.

Subroutines for Communication
Pairwise or point-to-point: Send and Receive
Collectives: all processors get together to
    Move data: Broadcast, Scatter/Gather
    Compute and move: sum, product, max, prefix sum, ... of data on many processors
Synchronization: no locks, because there are no shared variables to protect
Queries: how many processes? Which one am I? Any messages waiting?

The Standard itself:
    All MPI official releases, in both postscript and HTML
Other information:
    http://www.mcs.anl.gov/research/projects/mpi
    pointers to lots of stuff, including tutorials, a FAQ, and other MPI pages

MPI: the Message Passing Interface
The minimal set of MPI routines:
    MPI_Init        Initializes MPI.
    MPI_Finalize    Terminates MPI.
    MPI_Comm_size   Determines the number of processes.
    MPI_Comm_rank   Determines the label of the calling process.
    MPI_Send        Sends a message.
    MPI_Recv        Receives a message.

MPI_Init is called prior to any calls to other MPI routines. Its purpose is to initialize the MPI environment. It also strips off any MPI-related command-line arguments.
MPI_Finalize is called at the end of the computation, and it performs various clean-up tasks to terminate the MPI environment.
The prototypes of these two functions are:
    int MPI_Init(int *argc, char ***argv)
    int MPI_Finalize()
All MPI routines, data-types, and constants are prefixed by MPI_. The return code for successful completion is MPI_SUCCESS.

A communicator defines a communication domain: a set of processes that are allowed to communicate with each other.
Information about communication domains is stored in variables of type MPI_Comm.
Communicators are used as arguments to all message transfer MPI routines.
A process can belong to many different (possibly overlapping) communication domains.
MPI defines a default communicator called MPI_COMM_WORLD which includes all the processes.
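
To illustrate overlapping communication domains, here is a minimal sketch (not from the slides) using the standard routine MPI_Comm_split; the name row_comm is chosen for this example:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int world_rank, row_rank;
        MPI_Comm row_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Processes with the same color (world_rank/2) join the same new
           domain; each process now belongs to both MPI_COMM_WORLD and
           row_comm, with a (possibly different) rank in each. */
        MPI_Comm_split(MPI_COMM_WORLD, world_rank / 2, world_rank, &row_comm);
        MPI_Comm_rank(row_comm, &row_rank);
        printf("world rank %d has rank %d in its sub-communicator\n",
               world_rank, row_rank);

        MPI_Comm_free(&row_comm);
        MPI_Finalize();
        return 0;
    }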

The MPI_Comm_size and MPI_Comm_rank functions are used to determine the number of processes and the label of the calling process, respectively.
The calling sequences of these routines are as follows:
    int MPI_Comm_size(MPI_Comm comm, int *size)
    int MPI_Comm_rank(MPI_Comm comm, int *rank)
The rank of a process is an integer that ranges from zero up to the size of the communicator minus one.
SPMD: Single Program and Multiple Data
All processes run the same program on different data.
A process's rank (id) is essential: it is what distinguishes one process from another.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int npes, myrank, name_len;
        char processor_name[MPI_MAX_PROCESSOR_NAME];
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &npes);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        MPI_Get_processor_name(processor_name, &name_len);
        printf("Hello world from %s, rank %d/%d\n",
               processor_name, myrank, npes);
        MPI_Finalize();
        return 0;
    }

compile the program using mpicc:
    mpicc -o myprog myprog.c
run the program using mpirun:
    mpirun -np 5 myprog
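
With -np 5 this prints one line per process; the order is nondeterministic and the host names depend on the machine, so the output looks something like:
    Hello world from node01, rank 2/5
    Hello world from node01, rank 0/5
    ...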

The basic functions for sending and receiving messages in MPI are MPI_Send and MPI_Recv, respectively. The calling sequences of these routines are as follows:
    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                 int source, int tag, MPI_Comm comm, MPI_Status *status)
Data are presented by the triplet {buf, count, datatype}.
The dest/source is the receiving/sending process's rank in a communicator comm (the default is MPI_COMM_WORLD).
MPI provides equivalent datatypes for all C datatypes. This is done for portability reasons.
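
A minimal sketch (assuming at least two processes, with declarations as shown): rank 0 sends ten integers to rank 1 using the {buf, count, datatype} triplet:

    int a[10], myrank;
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0)
        MPI_Send(a, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);          /* dest = 1, tag = 0 */
    else if (myrank == 1)
        MPI_Recv(a, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, &status); /* source = 0 */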

The correspondence between MPI datatypes and C datatypes:

    MPI Datatype         C Datatype
    MPI_CHAR             signed char
    MPI_SHORT            signed short int
    MPI_INT              signed int
    MPI_LONG             signed long int
    MPI_UNSIGNED_CHAR    unsigned char
    MPI_UNSIGNED_SHORT   unsigned short int
    MPI_UNSIGNED         unsigned int
    MPI_UNSIGNED_LONG    unsigned long int
    MPI_FLOAT            float
    MPI_DOUBLE           double
    MPI_LONG_DOUBLE      long double
    MPI_BYTE             -
    MPI_PACKED           -

Messages are sent with an accompanying user-defined integer tag, to assist the receiving process in identifying the message.
The message-tag can take values ranging from zero up to the MPI defined constant MPI_TAG_UB.
MPI allows the specification of wildcard arguments for both source and tag (see the sketch after the status structure below):
If source is set to MPI_ANY_SOURCE, then any process of the communication domain can be the source of the message.
If tag is set to MPI_ANY_TAG, then messages with any tag are accepted.

On the receiving end, the status variable can be used to get information about the MPI_Recv operation. The corresponding data structure contains three fields:
    typedef struct MPI_Status {
        int MPI_SOURCE;
        int MPI_TAG;
        int MPI_ERROR;
    };
    int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
The MPI_Get_count function returns the precise count of data items received.
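
A short sketch combining the wildcards with the status object (buffer size and tag values chosen arbitrarily here):

    int buf[10], count;
    MPI_Status status;
    /* Accept a message from any source, with any tag. */
    MPI_Recv(buf, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    /* The status fields reveal who sent it and with which tag;
       MPI_Get_count gives the number of items actually received. */
    MPI_Get_count(&status, MPI_INT, &count);
    printf("got %d ints from rank %d with tag %d\n",
           count, status.MPI_SOURCE, status.MPI_TAG);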

MPI_Send and MPI_Recv are blocking: the process is blocked in the MPI function.
On the receive side, the message must be of length equal to or less than the length field specified.
For receives, the remote data has been safely copied into the receive buffer when the call returns.
For sends, the send buffer can be safely reused by the user without impacting the message transfer once the call returns.

    int a[10], b[10], myrank;
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        MPI_Recv(b, 10, MPI_INT, 1, 2, MPI_COMM_WORLD, &status);
        MPI_Send(a, 10, MPI_INT, 1, 1, MPI_COMM_WORLD);
    }
    else if (myrank == 1) {
        MPI_Recv(a, 10, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
        MPI_Send(b, 10, MPI_INT, 0, 2, MPI_COMM_WORLD);
    }

Since MPI_Recv is blocking, we have a deadlock: both processes wait in MPI_Recv and neither ever reaches its MPI_Send.

In the following code, process i receives a message from process i-1 (modulo the number of processes) and sends a message to process i+1 (modulo the number of processes):

    int a[10], b[10], npes, myrank;
    MPI_Status status;
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Recv(b, 10, MPI_INT, (myrank-1+npes)%npes, 1, MPI_COMM_WORLD, &status);
    MPI_Send(a, 10, MPI_INT, (myrank+1)%npes, 1, MPI_COMM_WORLD);

We have a deadlock: every process starts with a blocking MPI_Recv, so no process ever reaches its MPI_Send.

Avoiding Deadlocks
Odd and even processes order the send and the receive differently, which breaks the circular wait to avoid deadlock:

    int a[10], b[10], npes, myrank;
    MPI_Status status;
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank%2 == 1) {
        MPI_Send(a, 10, MPI_INT, (myrank+1)%npes, 1, MPI_COMM_WORLD);
        MPI_Recv(b, 10, MPI_INT, (myrank-1+npes)%npes, 1, MPI_COMM_WORLD, &status);
    }
    else {
        MPI_Recv(b, 10, MPI_INT, (myrank-1+npes)%npes, 1, MPI_COMM_WORLD, &status);
        MPI_Send(a, 10, MPI_INT, (myrank+1)%npes, 1, MPI_COMM_WORLD);
    }

Sending and Receiving Messages Simultaneously
To exchange messages, MPI provides the following function:

    int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype senddatatype,
                     int dest, int sendtag,
                     void *recvbuf, int recvcount, MPI_Datatype recvdatatype,
                     int source, int recvtag, MPI_Comm comm, MPI_Status *status)

The arguments include the arguments to the send and receive functions. If we wish to use the same buffer for both send and receive, we can use:

    int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype datatype,
                             int dest, int sendtag, int source, int recvtag,
                             MPI_Comm comm, MPI_Status *status)
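
For example, the ring exchange above can be written as a single call per process; a sketch with the same buffers and tags as before:

    int a[10], b[10], npes, myrank;
    MPI_Status status;
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    /* Send a to the right neighbour and receive b from the left one in
       a single operation; MPI schedules the two transfers so that the
       circular wait of the Recv-then-Send version cannot occur. */
    MPI_Sendrecv(a, 10, MPI_INT, (myrank+1)%npes, 1,
                 b, 10, MPI_INT, (myrank-1+npes)%npes, 1,
                 MPI_COMM_WORLD, &status);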

Non-Blocking Communication
MPI Functions: MPI_Isend, MPI_Irecv
MPI_Request *request: a so-called opaque object, which identifies communication operations and matches the operation that initiates the communication with the operation that terminates it.

Non-Blocking Communication
MPI Functions: MPI_Wait
MPI_Wait blocks until the operation identified by the request is finished.
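
A sketch of the ring exchange with non-blocking calls (MPI_Waitall, a related completion routine, waits on both requests at once; a plain MPI_Wait on each request would work equally well):

    int a[10], b[10], npes, myrank;
    MPI_Request reqs[2];
    MPI_Status stats[2];
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    /* Start both transfers first; neither call blocks. */
    MPI_Irecv(b, 10, MPI_INT, (myrank-1+npes)%npes, 1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(a, 10, MPI_INT, (myrank+1)%npes, 1, MPI_COMM_WORLD, &reqs[1]);
    /* Block until both operations have completed. */
    MPI_Waitall(2, reqs, stats);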

Non-Blocking Communication
MPI Functions: MPI_Testsome, MPI_Waitsome

    int MPI_Testsome(int incount, MPI_Request *array_of_requests,
                     int *outcount, int *array_of_indices,
                     MPI_Status *array_of_statuses)
    int MPI_Waitsome(int incount, MPI_Request *array_of_requests,
                     int *outcount, int *array_of_indices,
                     MPI_Status *array_of_statuses)

Tests or waits until at least one of the operations with active handles in the list has completed.
Needs an array of requests (also indices and statuses) for multiple Isend/Irecv operations.
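
A sketch of MPI_Waitsome servicing several outstanding receives as they complete (NRECV and the tag value are chosen for this illustration):

    #define NRECV 4
    int bufs[NRECV][10], outcount, indices[NRECV], done = 0;
    MPI_Request reqs[NRECV];
    MPI_Status stats[NRECV];
    /* Post all receives up front. */
    for (int i = 0; i < NRECV; i++)
        MPI_Irecv(bufs[i], 10, MPI_INT, MPI_ANY_SOURCE, 1,
                  MPI_COMM_WORLD, &reqs[i]);
    while (done < NRECV) {
        /* Blocks until at least one request completes; the completed
           requests are reported through indices[0..outcount-1]. */
        MPI_Waitsome(NRECV, reqs, &outcount, indices, stats);
        for (int i = 0; i < outcount; i++)
            printf("request %d completed (from rank %d)\n",
                   indices[i], stats[i].MPI_SOURCE);
        done += outcount;
    }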

