Run Options
- The OptiStruct Script
- The Altair Compute Console
- HyperMesh
For more information on the Altair Compute Console, refer to the documentation available from the Altair Compute Console GUI.
Option | Argument | Description | Supported Platform |
---|---|---|---|
-acf | N/A | Option to specify that the input file is an ACF file for a multibody dynamics solution sequence. | All Platforms |
-aif | N/A | An internal option generated automatically when a job is run from the Compute Console (ACC) (for OptiStruct runs) with Use Solver Control (-screen manual option entry) turned ON. -aif is added to the command line to allow the Abort/Stop functions. The -aif start option is internal, is not part of the Compute Console (ACC) Options Selector, and can be ignored. | All Platforms |
-altver | Alternate Version | Controls the alternate version of the OptiStruct executable to be used. The OptiStruct executables are available in the installation's executable folder for both Linux and Windows. | All Platforms |
-amls | YES, NO | Invokes the external AMLS eigenvalue solver. The AMLS_EXE environment variable needs to point to the AMLS executable for this setting to work. Overrides the PARAM, AMLS setting in the input file. | Linux |
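As a hedged sketch of an AMLS run (the model filename and AMLS path below are placeholders, not values from this document):

```shell
# Hypothetical install path; AMLS_EXE must point to the AMLS executable.
export AMLS_EXE=/opt/amls/bin/amls
optistruct model.fem -amls yes -amlsncpu 4 -amlsmem 8.0
```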
-amlsncpu | Integer > 1 | Defines the number of CPUs to be used by the external AMLS eigenvalue solver. OptiStruct and AMLS can be run with different processor allocations; for example, OptiStruct can run with 1 processor and AMLS with 4 processors in the same run. Only valid with -amls. Overrides the PARAM, AMLSNCPU setting in the input file. Default = 1 | Linux |
-amlsmem | Memory in GB <Real> | Defines the amount of memory, in gigabytes, to be used by the external AMLS eigenvalue solver. This run option is only supported for AMLS versions 5 and later. | Linux |
-amses | YES, BOTH | Invokes the AMSES eigenvalue solver. Note: When the executable is run directly (not recommended), -amses without any other arguments activates AMSES for the structural model. This is not possible when running via the script or the Compute Console (ACC). | All Platforms |
-analysis | N/A | Submits an analysis run even if optimization cards are present in the model. This option still reads and checks the optimization data, and the job is terminated if any errors exist (see -optskip below to submit the analysis run without checking the optimization data). Cannot be combined with certain other run options. If an OptiStruct optimization model has DOPTPRM, DESMAX, 0, the number of license units checked out is the same as the checkout for an OptiStruct analysis job. | All Platforms |
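A minimal sketch of an analysis-only submission (the model filename is a placeholder):

```shell
# Runs analysis even though the deck contains optimization cards;
# optimization data is still read and checked for errors.
optistruct model.fem -analysis
```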
-asp | N/A | When INCLUDE files are used, this option strips the path from all filenames in the input file, leaving only a root name with extension. This forces all input and output files to be located inside the same folder (or subfolders defined using the -incpath option). When both the -asp and -incpath run options are used, the path defined with -incpath is also stripped of all but the actual folder's name and is treated relative to the run folder. Note: This option is useful only when automated scripts are used to submit jobs to multi-user (local or remote) machines. | All Platforms |
-buildinfo | N/A | Displays build information for selected solver executables. | OptiStruct |
-check | N/A | Submits a check job through the command line. The memory needed is automatically allocated. Cannot be combined with certain other run options. | All Platforms |
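A hedged sketch of a check run (placeholder filename); a check job is a quick way to validate the deck and obtain memory estimates before a full solve:

```shell
optistruct model.fem -check
```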
-checkel | YES, NO, FULL. Note: An argument for -checkel is optional. If an argument is not specified, the default argument (YES) is assigned. | Controls the element quality check performed on the model. | All Platforms |
-compose | {compose_installation_path} | Specifies the location of the Compose installation directory. This run option must be used when Altair Compose is invoked during an OptiStruct run, in the case of an external response (DRESP3). For further information, refer to External Responses using Altair Compose. | All Platforms |
-compress | N/A | Submits a compression run, which removes duplicate (matching) material and property definitions. Property definitions referencing deleted material definitions are updated to reference the retained matching material definition (reduction of property definitions occurs after this process). Element definitions referencing deleted property definitions are updated to reference the retained matching property definition. The resulting Bulk Data file is written to a file named <filename>.echo. It is assumed that there is no optimization, nonlinear, or thermal-material data in the Bulk Data; if such data are present in the input file, the resulting file (<filename>.echo) may not be valid. Refer to Compression Run for more information. | All Platforms |
-core | in, out, min | Controls the in-core/out-of-core solution mode. The solver assigns the appropriate memory required; if there is not enough memory available, OptiStruct errors out. Overwrites the corresponding setting in the input file. | All Platforms |
-cores | auto, N | Total number of cores to be used for an MPI job. For both arguments (auto and N), np = 2 will be enforced. Note: Number of cores = np * nt. | All Platforms |
-cpu, -proc, -nproc, -ncpu, -nt or -nthread | Number of cores | Number of cores to be used for the SMP solution. | All Platforms |
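A minimal SMP sketch (placeholder filename; the aliases -cpu, -proc, -nproc, -ncpu, -nt, and -nthread are interchangeable per the row above):

```shell
# Run the SMP solution on 8 cores.
optistruct model.fem -nt 8
```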
-ddm | N/A | Runs MPI-based Hybrid Shared/Distributed Memory Parallelization (SPMD) in Domain Decomposition Mode. DDM is activated by default when any MPI run is requested by specifying an MPI run option (such as -np). | Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms. |
-ddmngrps | Number of MPI process groups <Integer ≥ 1, MAX, MIN, AUTO, or -1> | Defines the number of MPI process groups for task-based parallelization in DDM. Note: The PARAM,DDMNGRPS entry is also available for this feature. If both are defined simultaneously, PARAM,DDMNGRPS is overwritten by the value defined on the -ddmngrps run option. Refer to DDM Level 1 – Task-based parallelization in Domain Decomposition Method (DDM). | All Platforms |
-delay | Number of seconds | Delays the start of an OptiStruct run for the specified number of seconds. This functionality does not use licenses, computer memory, or CPU before the start of the run (that is, until the delay expires). This option may be used to wait for computing resources that are not yet freed by the previous run. If the previous run crashed, the license may still be locked by the license server (depending on the timeout value), or the memory may still be reserved by the operating system. | All Platforms |
-dir | N/A | The run location is changed to the directory where the input file is located before the run is initiated. | All Platforms |
-ffrs | YES, NO | Invokes the external FastFRS (Fast Frequency Response Solver). The FASTFRS_EXE environment variable should point to the FastFRS executable for this setting to work. Overrides the PARAM,FFRS setting in the input file. | Linux |
-ffrsncpu | 1, 2, or 4 | Defines the number of CPUs to be used by the external FastFRS solver. This parameter sets the environment variable OMP_NUM_THREADS; the default value is the current value of OMP_NUM_THREADS. Note: This value can be set by the command line arguments -nproc or -ncpu. OptiStruct and FastFRS can be run with different processor allocations; for example, OptiStruct can run with 1 processor and FastFRS with 4 processors in the same run. Only valid when -ffrs is used. Overrides the PARAM, FFRSNCPU setting in the input file. Default: number of processors used by OptiStruct. | Linux |
-ffrsmem | Memory in GB <Real> | Defines the amount of memory, in gigabytes, to be used by the external FastFRS solver. This run option is only supported for FastFRS versions 2 and later. | Linux |
-filemap | <filename> | This option is useful internally, during the submission of a job to a remote server or queuing system. Usually, during such submissions, filenames specified in the input deck cannot be used (for example, when they contain Windows-specific names and the remote server is running Linux). This option is also used when the input file consists of INCLUDE files with identical names (in different locations). The contents of <filename> specify a dictionary of filenames (a list of filename pairs, one pair per line); the order of pairs in the file is irrelevant. Before opening any input file, OptiStruct checks this map and, instead of opening the file as defined by the input deck, opens the corresponding filename in the dictionary. The contents of <filename> are split into two groups. The first group refers to the names used on INCLUDE statements; for these entries, the first name in a pair must be a single asterisk and a number (n), indicating a map to the nth listed INCLUDE file. The second group is the list of other names; here the first name is compared verbatim to the filename used on ASSIGN, RESTART, and similar cards. Spaces at the start of each line are optional. Such a filemap can, for example, substitute /scratch/file_155.fem for the second INCLUDE file, and /scratch/folder/restart.txt for any file referenced as 'restart.sh' with no path (presumably on the RESTART card). The comparison is always case sensitive, even on Windows hosts, which disregard case in filenames. Note: This option is useful only when automated scripts are used to submit jobs to multi-user (local or remote) machines. | All Platforms |
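A hedged sketch of a filemap matching the substitutions described above; the exact pair syntax is inferred from the description (asterisk-number for the nth INCLUDE, verbatim name otherwise), so treat this layout as an assumption:

```shell
# Hypothetical filemap: remap the 2nd INCLUDE file and a RESTART reference.
cat > filemap.txt <<'EOF'
  *2          /scratch/file_155.fem
  restart.sh  /scratch/folder/restart.txt
EOF
optistruct model.fem -filemap filemap.txt
```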
-fixlen | RAM (in MB, by default). See Comment 5. | Disables dynamic memory allocation. OptiStruct allocates the given amount of memory and uses it throughout the run. If this memory is not available, or if the allocated amount is not sufficient for the solution process, OptiStruct terminates with an error. CAUTION: To avoid over-specifying the memory when using this option, it is suggested to first run OptiStruct with the -check option to obtain memory estimates. On certain platforms, this option avoids memory fragmentation and allows allocating more memory than is possible with dynamic memory allocation. | All Platforms |
-gpu | N/A | Activates GPU computing. | All Platforms |
-gpuid | N | N: integer, optional. Selects the GPU card. Default = 1 | All Platforms |
-h | N/A | Displays script usage. | All Platforms |
-hostfile | <filename> | This option allows usage of multiple hosts (machines) for an MPI run by specifying the list of hosts in a separate file (<filename>). Example: let hostfile.txt be a file listing the hosts host1 and host2; if -np 8 is specified, 4 MPI processes are used on each host (host1 and host2). | All Platforms |
-hostmem | <yes, no, blank> | This run option is available for MPI runs. | All Platforms |
-hosts | List of host names (comma separated) | This option allows usage of multiple hosts (machines) for an MPI run by directly specifying the hosts as an argument to the run option. When this option is used, the number of MPI processes (-np) is distributed across the listed hosts. | All Platforms |
-incpath | <path> | The -incpath option modifies the search for files defined in the input deck via INCLUDE <filename>. All folders defined on this option are used to search for any include file defined in the input deck. If this option is used, then incpath from the .cfg file has no effect. | All Platforms |
-inventory | N/A | This option forces OptiStruct into a short run, which produces a special file named <filename>.flst. This file contains a list of all input files needed for a run and their actual locations; the *.flst file uses XML format. In part-instance mode, it is expected that the same file may be read multiple times, and this results in the same line repeated multiple times in the .flst file. | All Platforms |
-len | RAM (in MB, by default). See Comment 5. | Preferred upper bound on dynamic memory allocation. When different algorithms can be chosen, the solver tries to use the fastest algorithm that can run within the specified amount of memory; if no such algorithm is available, the algorithm with the minimum memory requirement is used. For example, the sparse linear solver, which can run in-core, out-of-core, or min-core, is selected accordingly. Default = 8000 MB. Best practices for -len specification: for proper memory allocation, avoid using the exact reported memory estimate value (for example, from a check run). The -len value should be based on the actual memory of the system; this is the recommended memory limit for the job, and it does not necessarily represent the memory utilized by the job or an actual hard limit. This way, the job is more likely to run with the best possible performance. If the same system is shared by multiple jobs, follow the same procedure, using each job's individual maximum memory in place of the total system memory. If a job runs out-of-core instead of in-core (it exceeded the memory allocation), it will still run very efficiently; however, make sure the job does not exceed the actual memory of the system itself, as this slows the run down by a large factor. The recommended way to handle this is to specify -maxlen as the actual memory of the system, to limit the maximum memory that can be used. Note: If a value greater than 16 GB is specified, the internal long (64-bit) integer sparse direct solver is activated automatically. | All Platforms |
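A hedged sketch of memory-bounded runs following the best practices above (values are placeholders, not recommendations for any specific machine):

```shell
# Preferred allocation of 16000 MB (default unit is MB).
optistruct model.fem -len 16000
# Same intent with suffixes, plus a hard cap at the system's physical memory.
optistruct model.fem -len 16G -maxlen 60G
```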
-lic | FEA, OPT | The solver checks out a license of the specified type before reading the input data. Once the input data is read, the solver verifies that the requested license is of the correct type; if this is not the case, OptiStruct terminates with an error. No default. | All Platforms |
-licwait | Hours to wait for a license to become available. Note: An argument for -licwait is optional. If the argument is not specified, the default argument (12) is assigned. | If -licwait is present and sufficient Altair Units (AU) are not available, OptiStruct waits for up to the specified number of hours (default = 12) for licenses to become available and then starts the run. The maximum wait period that can be specified is 168 hours (a week). OptiStruct checks for available Altair Units every two minutes. Note: If sufficient units are not available initially, OptiStruct waits two minutes before checking again; therefore, this process does not guarantee any place in a queue for license checkout. If sufficient units are checked back in to the license server inside the two-minute window, but another process requests the AUs before OptiStruct checks again, the units are taken by the other process, and OptiStruct continues to wait until enough AUs are available at the time it checks for their availability (every two minutes). | All Platforms |
-localfilesonly | N/A | Similar to -asp, but it affects only the INCLUDE paths. | All Platforms |
-manual | N/A | Launches the online OptiStruct User Guide. | All Platforms |
-maxlen | RAM (in MB, by default). See Comment 5. | Hard limit on the upper bound of dynamic memory allocation. OptiStruct will not exceed this limit. No default. | All Platforms |
-minlen | RAM (in MB, by default). See Comment 5. | Hard limit on the lower bound of dynamic memory allocation. This is the minimum amount of memory allocated in the dynamic memory allocation process; OptiStruct will not go below this limit. Default = 10% of -len. | All Platforms |
-mmo | N/A | The -mmo option can be used to run multiple optimization models in a single run. | Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms. |
-monitor | N/A | Monitors convergence from an optimization or nonlinear run. Equivalent to SCREEN, LOG in the input deck. | All Platforms |
-mpi | i (Intel MPI), pl (IBM Platform-MPI, formerly HP-MPI), ms (MS-MPI), pl8 (for versions 8 and newer of IBM Platform-MPI). Note: An argument for -mpi is optional. If an argument is not specified, Intel MPI is used by default. | Specifies the Message Passing Interface (MPI) type for MPI-based SPMD runs on supported platforms. | Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms. |
-mpiargs | <arguments for mpirun> | This run option can be used in MPI-based parallelization runs to specify additional arguments for mpirun. Note: This option is valid for an MPI run only. (Example: optistruct infile.fem -mpi i -np 4 -mpiargs "<args_for_mpirun>") | Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms. |
-mpipath | path | Specifies the directory containing HP-MPI's mpirun executable. Note: This option is useful if MPI environments from multiple MPI vendors are installed on the system. Valid for an MPI run only. | Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms. |
-ncpu | Number of cores | Same as -cpu | All Platforms |
-ngpu | Number of GPUs | N: integer. Identifies the number of GPU cards to be used for the solution. Default = 1. Maximum = 8 | All Platforms |
-nlrestart | Subcase ID | Restarts an explicit dynamic solution sequence from the specified Subcase ID. If a Subcase ID is not specified, the restart begins from the first explicit dynamic subcase that ended with an error in the previous run. Note: An explicit dynamic solution sequence is a series of explicit dynamic subcases (ANALYSIS=EXPDYN) linked by CNTNLSUB. | All Platforms |
-np | Total number of MPI processes for MPI runs | Total number of MPI processes to be used in SPMD MPI runs. Even if multiple nodes are used in a cluster MPI run, -np still indicates the total number of MPI processes for the entire run across all cluster nodes. Note: If -nt is not defined, it is recommended that -np be set lower than the total number of available cores. If -nt is specified in addition to -np, it is recommended that -np * -nt not exceed the total number of available cores. For more detailed information, refer to Hybrid Shared/Distributed Memory Parallelization (SPMD). | Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms. |
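A hedged sketch of a hybrid MPI/SMP launch consistent with the sizing note above (placeholder filename; core counts are illustrative):

```shell
# 4 MPI processes x 2 SMP threads each = 8 cores total, using Intel MPI.
optistruct model.fem -mpi i -np 4 -nt 2
```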
-nproc | Number of cores | Same as -cpu | All Platforms |
-nt | Number of cores | Same as -cpu | All Platforms |
-nthread | Number of cores | Same as -cpu | All Platforms |
-optskip | N/A | Submits an analysis run without performing a check on optimization data (skips reading all optimization-related cards). Cannot be combined with certain other run options. | All Platforms |
-out | N/A | Echoes the output file to the screen. This takes precedence over the SCREEN I/O Options Entry. | All Platforms |
-outfile | Prefix for output filenames | Option to direct the output files to a directory different from the one in which the input file exists. If such a directory does not exist, the last part of the path is assumed to be the prefix of the output files. This takes precedence over the OUTFILE I/O Options Entry. | All Platforms |
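A minimal sketch of redirecting output (both paths are placeholders):

```shell
# Write output files under /results/run01 instead of next to model.fem;
# if /results/run01 does not exist, "run01" becomes the output-file prefix.
optistruct model.fem -outfile /results/run01
```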
-proc | Number of cores | Same as -cpu | All Platforms |
-radopt | N/A | Option to run Radioss optimization in OptiStruct. A Radioss optimization file <name>.radopt should be input to OptiStruct, and the optional -radopt run option may be specified to request an optimization run for a Radioss input deck. Note: The Radioss Starter and input files supporting the optimization input should be available in the same directory as the <name>.radopt file. Refer to Design Optimization in the User Guide for more information. | All Platforms |
-ramdisk | Size of virtual disk (in MB, by default). See Comment 5. | Option to specify an area in RAM allocated to store information that would otherwise be stored in scratch files on the hard drive. The upper limit of RAMDISK for the Compute Console (ACC) or the OptiStruct script is 10,000,000 (10 TB). For a more detailed description, see the RAMDISK setting on the SYSSETTING I/O Options Entry. | All Platforms |
-reanal | Density threshold | This option can only be used in combination with -restart. Inclusion of this option on a restart run causes the last iteration to be reanalyzed without penalization. If the density threshold given is less than the value of MINDENS (default = 0.01) used in the optimization, all elements are assigned the densities they had during the final iteration of the optimization; as there is no penalization, stiffness is then proportional to density. If the density threshold given is greater than the value of MINDENS, elements whose density is less than the given value are assigned a density equal to MINDENS, and all others a density of 1.0. | All Platforms |
-restart | filename.sh | Specifies a restart run. If no argument is provided, OptiStruct looks for the restart file, which has the same root as the input file with the extension .sh. If you enter an argument on a PC, you need to provide the full path to the restart file, including the file name. Cannot be combined with certain other run options. | All Platforms |
-rnp | Number of processors | Number of processors to be used in Hybrid Shared/Distributed Memory Parallelization (SPMD) for EXPDYN analysis. | All Platforms |
-rnt | Number of cores | Number of cores to be used for OptiStruct SMP for EXPDYN analysis. | All Platforms |
-rsf | Safety factor | Specifies a safety factor over the limit of allocated memory. Not applicable with certain memory allocation options. | All Platforms |
-savelog | N/A | Saves the screen output to a permanent file named <filename>.log. This can be useful during debugging, as OptiStruct prints some messages only to the screen. The SCREEN I/O Options Entry is required in conjunction for maximum information to be printed to the .log file. | All Platforms |
-scr or -tmpdir | path, filesize=n, slow=1 | Option to choose directories in which the scratch files are written. The filesize=n and slow=1 arguments are optional; multiple arguments may be separated by a comma. Multiple scratch directories may be defined through repeated instances of this option. This overwrites the corresponding environment variable. For a more detailed description, see the TMPDIR I/O Options Entry. | All Platforms |
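A hedged sketch of redirecting scratch files (paths are placeholders; repeat the option for multiple scratch directories as described above):

```shell
# Put scratch files on fast local disks instead of the run directory.
optistruct model.fem -scr /local/scratch1 -scr /local/scratch2
```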
-scrfmode | basic, buffered, unbuffer, smbuffer, stripe, mixfcio | Option to select a different mode of storing scratch files for the linear solver (especially for out-of-core and minimum-core solution modes). Multiple arguments may be comma separated. For a description of the arguments, see the SCRFMODE setting on the SYSSETTING I/O Options Entry. | All Platforms |
-testmpi | N/A | Checks whether MPI is configured properly and whether the SPMD version of the OptiStruct executables is available for this system. | All Platforms |
-sp | N/A | Option to select the single precision executable for the run. This allows you to select the 64-bit integer, 32-bit floating point build for either SMP or MPI runs. | All Platforms |
-v | Version | Controls the version of the OptiStruct executable to be used. The OptiStruct executables are available in the installation's executable folder for both Linux and Windows. Note: By default, the highest version among the available executables is used if the -v option is not defined. | All Platforms |
-version | N/A | Checks version and build time information from OptiStruct. | All Platforms |
-xml | N/A | Option to specify that the input file is an XML file for a multibody dynamics solution sequence. | All Platforms |
Comments
1. Any arguments containing spaces or special characters must be quoted in {}, for example: -mpipath {C:\Program Files\MPI}. File paths on Windows may use a backward slash "\" or a forward slash "/", but must be within quotes when using a backslash "\".
2. Currently, the solver executable (OptiStruct) does not have a specific limit on the number of processors/cores assigned to the SMP part of the run (-nt/-nthread).
3. The above arguments are processed by the solver script(s) and not by the actual executable. If you are developing internal scripts that use the executable directly, you can get specific information about the command line arguments accepted by the executable from the content of the .stat file, where these arguments are listed for each run.
4. The order of the above options is arbitrary. However, options for which arguments are optional should not be followed immediately by the INPUT_FILE_NAME argument.
5. For memory-related options (-len, -fixlen, -minlen, -maxlen, and -ramdisk), the default unit of memory is MB. However, the suffixes M/m and G/g can be used to represent memory in MB and GB, respectively. Examples: -minlen 2G or -minlen 2000M
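The memory-suffix rule in Comment 5 can be sketched as follows (placeholder filename; the two forms request the same allocation):

```shell
# Default unit is MB; G/g and M/m suffixes are accepted.
optistruct model.fem -minlen 2G      # 2 GB minimum allocation
optistruct model.fem -minlen 2000M   # equivalent, expressed in MB
```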