Massively Parallel Trotter-Suzuki Solver 1.6.2
This tutorial is about using the C++ API. If you are interested in the Python version, refer to Read the Docs.
Multicore computations
This example uses all cores on a single computer to calculate the total energy after evolving a sinusoid initial state. First we set the physical and simulation parameters of the model. We set the mass equal to one, we discretize the space into 500 lattice points in each direction, and we set the physical length to the same value. We would like to have a hundred iterations with 0.01 seconds between each:
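In outline, the definitions could read as follows (the variable names are our own, chosen to match the rest of this example):

    double particle_mass = 1.;           // mass in natural units
    int dimension = 500;                 // lattice points in each direction
    double length_x = double(dimension); // physical extent matches the grid size
    double length_y = double(dimension);
    int iterations = 100;                // number of evolution steps
    double delta_t = 0.01;               // time step in seconds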
The next step is to define the lattice, the state, and the Hamiltonian:
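A sketch using the library's Lattice, SinusoidState, and Hamiltonian classes; the exact constructor arguments are assumptions and may vary slightly between versions:

    Lattice *grid = new Lattice(dimension, length_x, length_y);
    State *state = new SinusoidState(grid, 1, 1);                   // sinusoid initial state
    Hamiltonian *hamiltonian = new Hamiltonian(grid, NULL, particle_mass);  // no external potential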
Note that the state is centered around the origin and scaled according to the grid. The centering and scaling are exposed to the user via the center_coordinates function.
With these objects representing the physics of the problem, we can initialize the solver:
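Assuming the Solver constructor takes the lattice, the state, the Hamiltonian, and the time step:

    Solver *solver = new Solver(grid, state, hamiltonian, delta_t);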
Then we can evolve the state for the hundred iterations:
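Assuming evolve takes the iteration count and an optional boolean (explained below):

    solver->evolve(iterations, false);  // false selects real-time evolution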
If we would like to have imaginary time evolution to approximate the ground state of the system, a second boolean parameter can be passed to the evolve method. Setting it to true yields imaginary time evolution.
We can write the evolved state to a file:
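Assuming the state exposes a write_to_file method that takes a file name prefix:

    state->write_to_file("evolved_state");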
If we need a series of snapshots of the evolution, say, every hundred iterations, we can loop these two steps, adjusting the prefix of the file to be written to reflect the number of evolution steps.
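A hypothetical snapshot loop along these lines (the file naming scheme is our own):

    // Ten snapshots, one every hundred iterations; requires <string> for std::to_string
    for (int snapshot = 1; snapshot <= 10; ++snapshot) {
        solver->evolve(iterations, false);
        state->write_to_file("state_" + std::to_string(snapshot * iterations));
    }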
Finally, we can calculate the expectation value of the energies:
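Assuming the solver exposes getters for the energy expectation values:

    double total_energy = solver->get_total_energy();
    double kinetic_energy = solver->get_kinetic_energy();
    std::cout << "Total energy: " << total_energy << std::endl;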
The following file, simple_example.cpp, summarizes the above:
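A self-contained sketch of how such a file might look, combining the steps above under the same assumptions about the API:

    #include <iostream>
    #include "trottersuzuki.h"

    using namespace std;

    int main(int argc, char** argv) {
        // Physical and simulation parameters
        double particle_mass = 1.;
        int dimension = 500;
        double length_x = double(dimension), length_y = double(dimension);
        int iterations = 100;
        double delta_t = 0.01;

        // Lattice, initial state, and Hamiltonian
        Lattice *grid = new Lattice(dimension, length_x, length_y);
        State *state = new SinusoidState(grid, 1, 1);
        Hamiltonian *hamiltonian = new Hamiltonian(grid, NULL, particle_mass);

        // Initialize the solver and evolve in real time
        Solver *solver = new Solver(grid, state, hamiltonian, delta_t);
        solver->evolve(iterations, false);

        // Write the evolved state and report the total energy
        state->write_to_file("evolved_state");
        cout << "Total energy: " << solver->get_total_energy() << endl;

        delete solver;
        delete hamiltonian;
        delete state;
        delete grid;
        return 0;
    }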
Compile it with
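For instance, assuming the headers and the library are installed where the compiler finds them (add -I and -L flags pointing at the installation otherwise):

    g++ simple_example.cpp -o simple_example -ltrottersuzuki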
GPU version
If the library was compiled with CUDA support, it is enough to change a single line of code, requesting the GPU kernel when instantiating the solver class:
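Assuming the kernel is selected by an optional string argument of the Solver constructor:

    Solver *solver = new Solver(grid, state, hamiltonian, delta_t, "gpu");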
The compilation is the same as above. To use multiple GPUs, compile the code with MPI and launch one process for each GPU. Not all functionality is available in the GPU kernel.
Distributed version
There is very little modification required in the code to make it work with MPI. It is sufficient to initialize MPI and finalize it before returning from main. It is worth noting that the Lattice class keeps track of the MPI-related topology, and it also knows the MPI rank of the current process. The code for simple_example_mpi.cpp is as follows:
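A sketch under the same API assumptions as above; the mpi_rank member used to restrict output to a single process is also an assumption:

    #include <iostream>
    #include <mpi.h>
    #include "trottersuzuki.h"

    using namespace std;

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        double particle_mass = 1.;
        int dimension = 500;
        double length_x = double(dimension), length_y = double(dimension);
        int iterations = 100;
        double delta_t = 0.01;

        // The Lattice keeps track of the MPI topology and the rank of this process
        Lattice *grid = new Lattice(dimension, length_x, length_y);
        State *state = new SinusoidState(grid, 1, 1);
        Hamiltonian *hamiltonian = new Hamiltonian(grid, NULL, particle_mass);
        Solver *solver = new Solver(grid, state, hamiltonian, delta_t);

        solver->evolve(iterations, false);

        // Print from a single process only
        if (grid->mpi_rank == 0) {
            cout << "Total energy: " << solver->get_total_energy() << endl;
        }

        delete solver;
        delete hamiltonian;
        delete state;
        delete grid;
        MPI_Finalize();
        return 0;
    }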
Compile it with
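For instance:

    mpic++ simple_example_mpi.cpp -o simple_example_mpi -ltrottersuzuki

Then launch it with one process per core, for example:

    mpirun -np 4 ./simple_example_mpi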
Keep in mind that the library itself has to be compiled with MPI to make it work.
The MPI compilation disables OpenMP multicore execution in the CPU kernel; therefore you must launch a process for each CPU core you want to use.