If solver_type is set to -diagonal then only the main diagonal of the system matrix is used to solve for all unknowns. This gives the program an explicit-like structure. In fact, if control_timestep_iterations is set to 1, a classical explicit finite element program is obtained.
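As a sketch, an input file could combine these two data items as follows (the index 10 on control_timestep_iterations is a placeholder; check the description of that data item for the exact record layout):

```
solver_type -diagonal
control_timestep_iterations 10 1
```

With one iteration per time step, each step reduces to a single diagonal solve, which is the explicit scheme described above.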
If solver_type is set to -matrix_iterative_bicg then the complete system matrix is used to solve for the principal unknowns (see the initialization section for an explanation of principal unknowns). A diagonally preconditioned BiConjugate Gradient (BiCG) method is applied.
If solver_type is set to -matrix_iterative_petsc then the complete system matrix is used to solve for the principal unknowns (see the initialization section). The PETSc library with iterative solvers is applied. See also control_options_solver_petsc_ksptype and control_options_solver_petsc_pctype. The PETSc solver is meant for expert users only. See tochnog/src/makefile for how to link the PETSc solvers to Tochnog. The PETSc library is developed by Satish Balay, William Gropp, Lois Curfman McInnes and Barry Smith (Mathematics and Computer Science Division, Argonne National Laboratory).
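A hedged sketch of a PETSc setup is given below. The values gmres and ilu are standard PETSc Krylov-method and preconditioner names; whether they are written exactly like this here is an assumption, so check the descriptions of control_options_solver_petsc_ksptype and control_options_solver_petsc_pctype for the accepted values:

```
solver_type -matrix_iterative_petsc
control_options_solver_petsc_ksptype gmres
control_options_solver_petsc_pctype ilu
```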
If solver_type is set to -matrix_superlu then the complete system matrix is used to solve for the principal unknowns (see the initialization section). The SuperLU library with a sequential direct solver (LU decomposition) is applied. The SuperLU solver is meant for expert users only. Also available are -matrix_superlu_mt (multithread-based parallel version of SuperLU) and -matrix_superlu_dist (MPI-based distributed parallel version of SuperLU). See tochnog/src/makefile for how to link the SuperLU solvers to Tochnog. The SuperLU library is developed by James W. Demmel (Computer Science Division, University of California, Berkeley), John R. Gilbert (Xerox Palo Alto Research Center) and Xiaoye S. Li (National Energy Research Scientific Computing Center, NERSC). Beware: if the equations are singular, for example because boundary conditions have been forgotten, SuperLU tends to fail in obscure ways (segmentation faults or similar).
In all cases, the first iteration gives estimates for velocities, displacements, etc. Typically at least two iterations are needed to get estimates for strains, stresses, etc.
Even if the equations are linear, using more iterations may change the results. This is because the strains, stresses, etc. are redistributed due to the continuous field approach. Typically the results become more accurate with more iterations, but two or three iterations suffice in most cases.
Even in the case of plasticity, damage, etc., the linear elastic stiffness matrix is used in both the diagonal solver and the matrix solvers. This gives stable iteration behavior, but requires that enough time steps and/or iterations are used.
If solver_type is set to -none then only the matrices and right-hand sides are set up; the equations are not actually solved.
For the multithreaded parallel solvers, use options_processors to set the number of processors that you want to use.
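For example, to run the multithreaded SuperLU solver on four processors, the input file could contain the fragment below (a sketch; the value 4 is a placeholder for the number of processors on your machine):

```
solver_type -matrix_superlu_mt
options_processors 4
```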
By default, -matrix_iterative_bicg is used.