Large-scale calculations with FHI-aims are really, conceptually, quite simple. They work just as smaller-scale calculations do:

  • Create appropriate input files and
  • Run FHI-aims, preferably in parallel, using MPI ("Message Passing Interface") communication, that is, use mpirun, srun, llrun, orterun, etc. (depending on your environment) to run the executable aims.<version>.scalapack.mpi.x in parallel.
  • If the code ran successfully, analyze the contents of the output files, e.g., using GIMS.
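As a concrete illustration, a minimal batch script for a Slurm-based cluster might look like the following sketch. The module names, node counts, and executable path are placeholders and will differ on your machine; consult your cluster's documentation for the correct values.

```shell
#!/bin/bash
#SBATCH --job-name=aims-benchmark
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64   # adjust to the cores per node of your machine
#SBATCH --time=01:00:00

# Placeholder module names -- load whichever compiler, MPI, and math
# libraries your FHI-aims executable was actually built against.
module load intel impi mkl

# control.in and geometry.in must be present in the working directory.
srun /path/to/aims.<version>.scalapack.mpi.x > aims.out 2>&1
```

On systems without Slurm, the `srun` line would be replaced by the corresponding `mpirun`, `llrun`, or `orterun` invocation.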

What is different is that

  • the computer environment on which we operate is different from that of a laptop, and
  • the computational demands (runtime, but critically, memory demands) can be drastically different from a computation that runs on, say, a laptop computer.

These two points, especially the computer environment, are often drastically underestimated when beginning to use FHI-aims on a large supercomputer for demanding calculations.

The process itself is straightforward. However, it is essential to take the time to test things carefully. That is what a good part of this tutorial is about.

Why benchmarking?

To find out whether the installation of the FHI-aims executable is optimal on your supercomputer, it is necessary to run benchmarks. By running benchmarks, you can compare your new installation to older, well-understood installations on other supercomputers (timings). A benchmark should (at least) cover the following aspects:

  1. Reproducing the outcome of a simulation: Are the total energy and other physical observables the same? Does my calculation show the same convergence behavior (e.g., the same number of SCF steps)?
  2. Comparing the timings: For the same number of MPI tasks, do I get similar timings?
  3. Checking the scaling: Does my simulation scale with the number of MPI tasks? (E.g., doubling the number of MPI tasks should approximately halve the runtime.)
  4. Finding the optimal runtime parameters: FHI-aims ships with default runtime parameters for most keywords. Some of them are even optimized at runtime. However, the defaults may not perform optimally in all circumstances, and testing some of them – especially those affecting the SCF convergence – may be worth a try.
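Point 3 can be quantified with the parallel speedup and efficiency relative to the smallest task count you measured. Here is a minimal sketch; the wall-clock times below are made-up illustrative numbers, not real FHI-aims measurements.

```python
# Wall-clock times (seconds) for the same calculation at different MPI task
# counts -- hypothetical example values; replace with your own measurements.
timings = {64: 1000.0, 128: 520.0, 256: 290.0}

base_tasks = min(timings)        # smallest measured task count as reference
base_time = timings[base_tasks]

for tasks in sorted(timings):
    speedup = base_time / timings[tasks]         # ideal: tasks / base_tasks
    efficiency = speedup / (tasks / base_tasks)  # ideal: 1.0
    print(f"{tasks:5d} tasks: speedup {speedup:5.2f}, efficiency {efficiency:5.2f}")
```

An efficiency close to 1.0 indicates good strong scaling; a clearly lower value suggests the task count is too large for the problem size, or that the installation deserves a closer look.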

You should test the first three points on a new computer before running production calculations. This will help you obtain reliable simulation results and avoid wasting (often expensive) computing resources. The last point is worth testing if you plan to run very expensive calculations.
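For point 1, a simple tolerance check on the total energy already goes a long way. A sketch, assuming you have extracted the total energies (in eV) from the output files of a reference installation and of the new one; the values and the tolerance below are illustrative placeholders, not real results.

```python
# Total energies (eV) from a reference machine and the new installation --
# illustrative placeholder values, not actual FHI-aims results.
e_reference = -2080.123456789
e_new       = -2080.123456792

# For identical input files and settings, total energies should agree very
# closely; the exact tolerance to use is your judgment call per system size.
tolerance_ev = 1e-5
diff = abs(e_new - e_reference)
print(f"|dE| = {diff:.3e} eV -> {'OK' if diff < tolerance_ev else 'CHECK'}")
```

The same pattern applies to other observables (forces, band gaps) and to the number of SCF steps, which should also match between the two installations.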

In the following, we will introduce how to address the above four points in practice.