Three direct solvers and one iterative solver are available for the solution of the set of equations.
In finite element analysis, a problem is represented by a set of algebraic equations that must be solved simultaneously. There are two classes of solution methods: direct and iterative.
Direct methods solve the equations using exact numerical techniques. Iterative methods solve the equations using approximate techniques where in each iteration, a solution is assumed and the associated errors are evaluated. The iterations continue until the errors become acceptable.
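The internals of the solvers described below are not public, but the two classes of methods can be illustrated generically. The sketch below (a minimal example, not any solver's actual implementation) solves the same small symmetric system with a direct method (factorization via `numpy.linalg.solve`) and with Jacobi iteration, which assumes a solution, evaluates the error, and repeats until the error becomes acceptable.

```python
import numpy as np

# A small symmetric, diagonally dominant system K u = f,
# standing in for the stiffness equations of an FE model.
K = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
f = np.array([1.0, 2.0, 3.0])

# Direct method: solve exactly (up to round-off) in one step.
u_direct = np.linalg.solve(K, f)

# Iterative method (Jacobi): start from a guess, evaluate the
# residual error, and repeat until the error is acceptable.
def jacobi(K, f, tol=1e-10, max_iter=500):
    D = np.diag(K)              # diagonal part of K
    R = K - np.diagflat(D)      # off-diagonal part of K
    u = np.zeros_like(f)        # initial assumed solution
    for it in range(max_iter):
        u = (f - R @ u) / D
        residual = np.linalg.norm(f - K @ u)
        if residual < tol:      # error acceptable: stop iterating
            return u, it + 1
    return u, max_iter

u_iter, n_iter = jacobi(K, f)
print(np.allclose(u_direct, u_iter, atol=1e-8))  # True
```

Both paths reach the same solution; the difference is that the iterative path trades an exact factorization for many cheap passes, which is why memory behavior and speed diverge as problems grow.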
The software offers the following choices:
Automatic: The software selects the solver based on the study type, analysis options, contact conditions, and so on. Some options and conditions apply only to either Direct Sparse or FFEPlus.
Direct Sparse: Select the Direct Sparse solver:
- when you have enough RAM and multiple CPUs on your machine.
- when solving models with No Penetration contact.
- when solving models of parts with widely different material properties.
For every 200,000 degrees of freedom (DOFs), you need 1 GB of RAM for linear static analysis. This is only a rule of thumb: the relationship between the number of equations (DOFs) and the memory requirement is not linear. For the most demanding data storage requirements (the allocated size of the matrices), RAM grows in proportion to the square of the number of equations (DOFs).
FFEPlus (iterative): The FFEPlus solver uses advanced matrix reordering techniques that make it more efficient for large problems. In general, FFEPlus is faster at solving large problems, and it becomes more efficient as the problem gets larger (up to the maximum available memory). For every 2,000,000 DOFs, you need 1 GB of RAM. In general, the FFEPlus solver requires less RAM than the Direct Sparse and Intel Direct Sparse solvers.
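The two rules of thumb above (1 GB of RAM per 200,000 DOFs for Direct Sparse in linear static analysis, 1 GB per 2,000,000 DOFs for FFEPlus) can be turned into a quick first-cut estimator. The helper below is not part of the software; it is only a sketch of the stated ratios, which in practice vary with the model, since the actual relationship is not linear.

```python
# Rough rules of thumb from the documentation (linear static analysis):
#   Direct Sparse: ~1 GB of RAM per 200,000 DOFs
#   FFEPlus:       ~1 GB of RAM per 2,000,000 DOFs
DOFS_PER_GB = {"Direct Sparse": 200_000, "FFEPlus": 2_000_000}

def estimate_ram_gb(dofs, solver):
    """First-cut RAM estimate in GB; real usage is not linear in DOFs."""
    return dofs / DOFS_PER_GB[solver]

# A 1,000,000-DOF model:
print(estimate_ram_gb(1_000_000, "Direct Sparse"))  # 5.0
print(estimate_ram_gb(1_000_000, "FFEPlus"))        # 0.5
```

The ten-fold difference between the two estimates matches the later statement that the Direct Sparse solver requires about 10 times more RAM than FFEPlus.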
Large Problem Direct Sparse: By leveraging enhanced memory-allocation algorithms, the Large Problem Direct Sparse solver can handle simulation problems that exceed the physical memory of your computer. If you initially select the Direct Sparse solver and, due to limited memory resources, it reaches an out-of-core solution, a warning message alerts you to switch to the Large Problem Direct Sparse solver.
The Large Problem Direct Sparse and Intel Direct Sparse solvers are more efficient than the FFEPlus and Direct Sparse solvers at taking advantage of multiple cores.
Intel Direct Sparse: The Intel Direct Sparse solver is available for static, thermal, frequency, linear dynamic, nonlinear, and topology studies. By leveraging enhanced memory-allocation algorithms and multi-core processing capability, the Intel Direct Sparse solver improves solution speeds for simulation problems that are solved in-core. In most cases, the Intel Direct Sparse solver is faster than the Direct Sparse solver. When the size of the model exceeds the maximum available memory, the Large Problem Direct Sparse is the most efficient solver.
The Direct Sparse and Intel Direct Sparse solvers are more efficient at taking advantage of multiple cores.
Choosing a Solver
The Automatic choice for a solver is the default option for Static, Frequency, Buckling, and Thermal studies.
In the case of multi-area contact problems, where the area of contact is found through several contact iterations, the Direct Sparse solver is preferred.
While all solvers are efficient for small problems (25,000 DOFs or less), there can be big differences in performance (speed and memory usage) in solving large problems.
If a solver requires more memory than available on the computer, then the solver uses disk space to store and retrieve temporary data. When this situation occurs, you get a message saying that the solution is going out of core and the solution progress slows down. If the amount of data to be written to the disk is very large, the solution progress can be extremely slow. In these cases (for static and nonlinear studies), use the Large Problem Direct Sparse.
The following factors help you choose the proper solver:
Size of the problem: In general, FFEPlus is faster at solving problems with more than 100,000 degrees of freedom (DOFs). It becomes more efficient as the problem gets larger.
Computer resources (available RAM and number of CPUs, cores, or processors): The Direct Sparse solver requires about 10 times more RAM than the FFEPlus solver. It becomes faster with more memory available on your computer. The Large Problem Direct Sparse solver leverages multi-core processing capability and improves solution speed for static and nonlinear studies.
Material properties: When the moduli of elasticity of the materials used in a model are very different (such as steel and nylon), iterative methods can be less accurate than direct methods. The direct solvers are recommended in such cases.
Analysis features: Analyses with No Penetration contacts, and with Bonded contacts enforced using constraint equations, typically solve faster with the direct solvers.
Depending on the study type, the following recommendations apply:
Static: Use the Direct Sparse and Large Problem Direct Sparse solvers when you have enough RAM and multiple CPUs for solving:
- Models with No Penetration contact, especially when you turn on friction effects.
- Models with parts that have widely different material properties.
- Mixed-mesh models.
For a linear static analysis, the Direct Sparse solver requires 1 GB of RAM for every 200,000 degrees of freedom (DOFs). The iterative FFEPlus solver is less demanding on memory (approximately 2,000,000 DOFs per 1 GB of RAM).
Frequency and Buckling: Use the FFEPlus solver to calculate any rigid body modes. A body without any restraints has six rigid body modes.
Use the Direct Sparse and Intel Direct Sparse solvers for:
- Considering the effect of loading on the natural frequencies.
- Models with parts that have widely different material properties.
- Models where incompatible mesh is bonded using constraint equations.
- Adding soft springs to stabilize inadequately supported models (buckling studies).
Simulation uses the subspace iteration method as the eigenvalue extraction method for the Direct Sparse solver, and the Lanczos method for the FFEPlus and Large Problem Direct Sparse solvers. It is more efficient to use Lanczos with iterative solvers like FFEPlus.
Subspace iteration can reuse the forward and back substitution of the direct (sparse) solvers within its iteration loop to evaluate the eigenvectors (the matrix only needs to be decomposed once). That is not possible with iterative solvers.
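The benefit of reusing a decomposition can be seen in a simple inverse iteration, a building block of subspace-type eigenvalue methods. This is a generic numerical sketch, not Simulation's actual implementation: the matrix is factorized once, and every loop iteration performs only cheap substitutions with the existing factor.

```python
import numpy as np

# Small symmetric positive definite "stiffness" matrix.
K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Decompose ONCE (Cholesky: K = L L^T).
L = np.linalg.cholesky(K)

def solve_with_factor(L, b):
    """Reuse the factor: two substitution solves instead of a new
    decomposition. (np.linalg.solve is used for brevity; production
    code would call dedicated triangular solves.)"""
    y = np.linalg.solve(L, b)       # forward substitution, L y = b
    return np.linalg.solve(L.T, y)  # back substitution, L^T x = y

# Inverse iteration: repeated substitutions converge to the
# eigenvector of the smallest eigenvalue of K.
x = np.ones(3)
for _ in range(50):
    x = solve_with_factor(L, x)
    x /= np.linalg.norm(x)

lam = x @ K @ x  # Rayleigh quotient, approximates the smallest eigenvalue
print(np.isclose(lam, np.linalg.eigvalsh(K)[0]))  # True
```

An iterative solver never forms such a factor, so each eigenvector evaluation would require a full iterative solve instead of two cheap substitutions, which is why the subspace method pairs naturally with direct solvers.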
Thermal: Thermal problems have one degree of freedom (DOF) per node, so their solution is usually much faster than that of structural problems with the same number of nodes. For very large problems (more than 500,000 DOFs), use the FFEPlus solver.
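The speed difference follows from the DOF count: a mesh with N nodes yields roughly N equations in a thermal study (one temperature per node) but 3N for solid structural elements (three translations per node; shell elements carry six DOFs per node). A minimal illustration of that arithmetic only:

```python
# DOFs per node: 1 for thermal (temperature), 3 for solid structural
# elements (translations), 6 for shells (translations + rotations).
def dof_count(nodes, dof_per_node):
    return nodes * dof_per_node

nodes = 100_000
print(dof_count(nodes, 1))  # thermal study:    100000 equations
print(dof_count(nodes, 3))  # solid structural: 300000 equations
```

Fewer equations per node means smaller matrices to factorize or iterate over, hence the faster thermal solutions.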
Nonlinear: For nonlinear studies of models that have more than 50,000 degrees of freedom, the FFEPlus solver typically reaches a solution in less time. The Large Problem Direct Sparse solver can handle cases where the solution goes out of core.
Solver Status
The Solver Status window appears when you run a study. In addition to progress information, it displays:
- Memory usage
- Elapsed time
- Study-specific information such as degrees of freedom, number of
nodes, number of elements
- Solver information such as solver type
- Warnings
The Intel Direct Sparse solver does not report solution progress.
All studies that use the FFEPlus (iterative) solver (except frequency and
buckling) let you access the convergence plot and solver parameters. The convergence
plot helps you visualize how the solution is converging. The solver parameters let
you manipulate the solver iterations so that you can either improve accuracy or
improve speed. You can either use the solver's preset values or change:
- Maximum number of iterations (P1)
- Stopping threshold (P2)
To improve accuracy, decrease the stopping threshold value. In slowly converging situations, you can improve speed by increasing the stopping threshold value or by decreasing the maximum number of iterations (with the understanding that the accuracy of the results can be affected).
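FFEPlus's internals are not documented here, but the roles of the two parameters can be seen in any Krylov-type iterative solver. The conjugate-gradient sketch below is generic, not FFEPlus itself; it exposes a maximum iteration count (analogous to P1) and a stopping threshold on the relative residual (analogous to P2). A looser threshold or a smaller iteration cap returns sooner at the cost of accuracy.

```python
import numpy as np

def conjugate_gradient(K, f, tol=1e-8, max_iter=1000):
    """Solve K u = f for symmetric positive definite K.
    tol      -- stopping threshold on the relative residual (like P2)
    max_iter -- maximum number of iterations (like P1)
    """
    u = np.zeros_like(f)
    r = f - K @ u          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    norm_f = np.linalg.norm(f)
    for it in range(max_iter):
        if np.sqrt(rs) / norm_f < tol:   # residual small enough: stop
            break
        Kp = K @ p
        alpha = rs / (p @ Kp)            # step length along p
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        p = r + (rs_new / rs) * p        # next conjugate direction
        rs = rs_new
    return u, it

K = np.array([[4.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, 2.0])

u_tight, _ = conjugate_gradient(K, f, tol=1e-12)  # more accurate
u_loose, _ = conjugate_gradient(K, f, tol=1e-2)   # faster, less accurate
print(np.allclose(K @ u_tight, f))  # True
```

Tightening `tol` drives the residual `f - K u` closer to zero, exactly as decreasing the stopping threshold does in the solver parameters above.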