
A Simple MPI Program - hello.c

Consider this demo program:

/* The Parallel Hello World Program */
#include <stdlib.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
   int node;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &node);

   printf("Hello World from Node %d\n", node);

   MPI_Finalize();
   return 0;
}

In a nutshell, this program sets up a communication group of processes; each process finds out its rank within that group, prints it, and exits. It is important for you to understand that in MPI, this program will start simultaneously on all machines. For example, if we had ten machines, then running this program would mean that ten separate instances of this program would start running together on ten different machines. This is a fundamental difference from ordinary C programs, where "run the program" is assumed to mean that only one instance of the program is running.

The first two lines should be familiar to all C programmers: they include the standard library and the input/output routines, like printf.

The next line,

#include <mpi.h>
includes the MPI functions. The file mpi.h contains prototypes for all the MPI routines in this program; this file is located in /usr/local/appl/mpich/include/mpi.h in case you actually want to look at it.

The program starts with the main(...) line, which takes the usual two arguments argc and argv, and declares one integer variable, node. The first step of the program,

      MPI_Init(&argc, &argv);

calls MPI_Init to initialize the MPI environment and generally set everything up. This should be the first MPI command executed in all programs. This routine takes pointers to argc and argv, looks at them, pulls out the purely MPI-relevant arguments, and generally fixes them so you can use command line arguments as normal.

Next, the program runs MPI_Comm_rank, passing it an address to node.

      MPI_Comm_rank(MPI_COMM_WORLD, &node);

MPI_Comm_rank will set node to the rank of the machine on which the program is running. Remember that in reality, several instances of this program start up on several different machines when this program is run. These processes will each receive a unique number from MPI_Comm_rank.

Because the program is running on multiple machines, each instance executes not only all of the commands explained so far, but also prints the hello world message, which includes its own rank.

If the program is run on ten computers, printf is called ten times, on ten different machines, in parallel. The order in which each process prints its message is undetermined: it depends on when each process reaches that point in its execution, and on how the messages travel over the network. Your guess is as good as mine. So, the ten messages will get dumped to your screen in some undetermined order, such as:

Hello World from Node 2
Hello World from Node 0
Hello World from Node 4
Hello World from Node 9
Hello World from Node 3
Hello World from Node 8
Hello World from Node 7
Hello World from Node 1
Hello World from Node 6
Hello World from Node 5

Note that all the printf's, though they come from different machines, will send their output intact to your shell window; this is generally true of output commands. Input commands, like getchar and scanf, will work only on the process with rank zero.

After doing everything else, the program calls MPI_Finalize, which shuts down MPI and generally cleans everything up. This should be the last MPI command executed in all programs.
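To actually build and run the program, an MPICH-style installation typically provides the mpicc wrapper compiler and an mpirun launcher; the exact command names, flags, and paths vary from system to system, so treat these as a sketch:

```shell
mpicc hello.c -o hello    # compile with the MPI wrapper compiler
mpirun -np 10 ./hello     # launch ten instances of the program
```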

Other sample MPI programs are in the directory ~aca319/labs/mpi-examples/