Hello World cmake+openmpi

February 25, 2010

After trying out a recent build of KDevelop4, I decided to give CMake a go. I currently have a project running that uses MPI (a parallelization library). MPI is nice and not that hard to use, but linking the project gets arbitrarily complicated. When I was using KDevelop3 and automake, this solution helped me. This time around the solution is so simple that I figured I'd share with the world how to write a Hello World program using OpenMPI and CMake. I will not cover the installation of those tools; you will have to find that out yourself, since it depends on which OS you use.

I have to say that KDevelop4 looks stunning. It has a lot of tools, automatically checks headers included in the project for functions, has semantic code control, integrates wonderfully with CMake, and generally looks like pure heaven for a programmer. At the moment it does crash every now and then, but then again it is not a finished product. I'm sure I could write ten pages about how good it is, but that is for another post, once I have tested it a bit more thoroughly.

Now, first the C++ code, which initializes MPI and then prints Hello World from each parallel process, plus the time it has taken to get there since initialization. I have named the file "main.cpp":

#include <iostream>
#include <mpi.h>

static int numprocs;

int main(int argc, char **argv) {
    int my_rank;

    // MPI initializations
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // Start a timer and say hello from each process
    double time_start = MPI_Wtime();
    std::cout << "Hello World, my rank is " << my_rank << " "
              << MPI_Wtime() - time_start << std::endl;

    // End MPI
    MPI_Finalize();
    return 0;
}

Now, if you find this complicated, have a look at an MPI tutorial. You can basically treat the four lines of MPI initialization as a black box; after that you have a timer which starts, and then the Hello World message. At the end of the code it is important to finalize MPI, otherwise you get an error message. Anyway, this is not the important part, because it is CMake that makes my life THAT much easier. Have a look at the CMake file:

SET(CMAKE_C_COMPILER mpicc)
SET(CMAKE_CXX_COMPILER mpicxx)
PROJECT(test)
ADD_EXECUTABLE(test main.cpp)
target_link_libraries(test mpi)

That’s it! The first two lines choose the compilers, because MPI code should be compiled with these special wrapper compilers, which are basically gcc with something extra (I consider them black boxes); note that they have to be set before the PROJECT() call, otherwise CMake may ignore them. The PROJECT() line defines the name of the project. Next, we define the name of the executable to be "test", and the file that executable depends on, "main.cpp". Finally, we link the "test" target against the library called "mpi", which is done in the last line. Simple as that!
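If you are curious what that "something extra" is, Open MPI's wrappers can show the command line they would actually run. The output below is only illustrative; the exact paths and libraries differ between systems:

$ mpicxx --showme
g++ -I/usr/lib/openmpi/include -pthread -L/usr/lib/openmpi/lib -lmpi_cxx -lmpi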

Afterwards you create another folder, e.g. "build", inside your project. From inside that folder you generate the makefiles with "cmake ../" (where ../ points to the folder containing your CMakeLists.txt), and compile/link with "make". The code can then be run with the command "mpirun -np 2 ./test", where 2 is the number of processes. Et voilà!
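Put together, a full out-of-source build looks something like this (the printed timings are illustrative, and the order of the ranks will vary between runs):

$ mkdir build
$ cd build
$ cmake ../
$ make
$ mpirun -np 2 ./test
Hello World, my rank is 0 2.8e-07
Hello World, my rank is 1 3.1e-07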

Update, May 24: I have noticed that quite a few people have clicked on this post. I figured I should mention (after learning quite a bit more CMake) that, in principle, I did not use many of CMake's features in this example. I therefore want to add an alternative CMake script that better exemplifies the power of CMake. Consider the following CMakeLists.txt:

cmake_minimum_required(VERSION 2.8)

project(mytest)

add_executable(mytest main.cpp)

# Require MPI for this project:
find_package(MPI REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${MPI_COMPILE_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${MPI_LINK_FLAGS}")
include_directories(${MPI_INCLUDE_PATH})
target_link_libraries(mytest ${MPI_LIBRARIES})

# Add a test that runs the binary on four processes:
enable_testing()
add_test(SimpleTest ${MPIEXEC} ${MPIEXEC_NUMPROC_FLAG} 4 ${CMAKE_CURRENT_BINARY_DIR}/mytest)

Instead of relying on the compiler wrappers (which requires you to know the precise names of those binaries), we here use CMake's find_package() functionality. This calls an external module which sets up the variables you need to compile and link against MPI. We add the word “REQUIRED” so that CMake knows that MPI is … required. The script might look more complex, but it is much more powerful: set up this way, it should work independently of which MPI installation you have. I also added a test at the bottom that runs the binary with four parallel processes. The test can be run after “make” using the command “ctest”.
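As a side note for readers on newer CMake versions (3.9 and later): FindMPI nowadays also defines imported targets such as MPI::MPI_CXX, which bundle the include paths, compile flags and link flags, so the manual flag handling above disappears. A minimal sketch:

cmake_minimum_required(VERSION 3.9)
project(mytest)

find_package(MPI REQUIRED)
add_executable(mytest main.cpp)
# The imported target carries include dirs, compile flags and link flags:
target_link_libraries(mytest MPI::MPI_CXX)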

6 Comments
  1. Dynetrekk permalink
    February 25, 2010 8:13 pm

    Hey, this looks pretty sweet! The disadvantage is that your project now has an additional dependency to build, but that’s about it, I suppose. Anyway, if you bother installing MPI, I guess cmake is a piece of cake, right?

  2. February 25, 2010 8:25 pm

    CMake is available for just about any OS, and on most Linux distros you will find it in your package manager. For OS X I'm sure you can find it in MacPorts. Not a huge issue.
    Trust me, last time around I spent half a day figuring out how to link in the correct order. It was such a mess it wasn't even remotely funny! This is an order of magnitude more comfortable: a config file I can actually read and understand…

    I do still have a problem with netcdf and openmpi not playing nicely together, though; they are overriding each other's macros! 😦

  3. November 30, 2011 5:26 am

    Small correction:

    include_directories(MPI_INCLUDE_PATH)

    should be

    include_directories(${MPI_INCLUDE_PATH})

  4. lumberJack permalink
    February 4, 2013 1:17 pm

    If I use your “# Require MPI for this project: ….. ” CMakeLists.txt, valgrind says the following when executing this simple program:

    int main()
    {
        MPI::Init();
        MPI::Finalize();
        return 0;
    }

    ==8546== LEAK SUMMARY:
    ==8546== definitely lost: 22,215 bytes in 39 blocks
    ==8546== indirectly lost: 6,967 bytes in 28 blocks
    ==8546== possibly lost: 22,593 bytes in 598 blocks
    ==8546== still reachable: 116,620 bytes in 113 blocks
    ==8546== suppressed: 0 bytes in 0 blocks
    ==8546== Rerun with --leak-check=full to see details of leaked memory

    If i use the MPI compiler wrappers, valgrind says:

    ==8583== LEAK SUMMARY:
    ==8583== definitely lost: 0 bytes in 0 blocks
    ==8583== indirectly lost: 0 bytes in 0 blocks
    ==8583== possibly lost: 0 bytes in 0 blocks
    ==8583== still reachable: 21,606 bytes in 1 blocks
    ==8583== suppressed: 0 bytes in 0 blocks
    ==8583== Rerun with --leak-check=full to see details of leaked memory

    What's going on?

    I'm using Open MPI 1.6.3.

    • February 4, 2013 1:37 pm

      Quite interesting, I also get a similar memory leak. However, I get precisely the same memory leak regardless of whether I use cmake to build or the wrappers. Are you sure the wrappers are from the same MPI version as the one cmake finds on your system? You can have several installed…

      Edit: You might find more information about this issue here: http://www.open-mpi.org/faq/?category=debugging#valgrind_clean
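      One quick way to check (assuming Open MPI's wrapper compilers and an already-configured build folder) is to compare what the wrapper reports against what CMake cached:

      $ mpicxx -showme:version    # version of the MPI behind the wrapper
      $ grep MPI CMakeCache.txt   # the MPI paths/libraries CMake found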
