# Run Options

The run options provided below can be used for OptiStruct jobs submitted through one of the following methods:
• The OptiStruct Script
• The Altair Compute Console
• HyperMesh

For more information on Altair Compute Console, navigate to Help > Compute Console Manual from the Altair Compute Console GUI.

Option Argument Description Supported Platform
-acf N/A Option to specify that the input file is an ACF file for a multibody dynamics solution sequence. All Platforms
-aif N/A An internal option generated automatically when a job is run from the Compute Console (ACC) (for OptiStruct runs) with Use Solver Control (-screen manual option entry) turned ON. -aif is added to the command line to allow Abort/Stop functions. The -aif start option is internal, is not part of the Compute Console (ACC) Options Selector, and can be ignored. All Platforms
-altver Alternate Version Controls the alternate version of the OptiStruct executable to be used.
The OptiStruct executables are available in the following folders for both Linux and Windows installations.
Linux
$ALTAIR_HOME/hwsolvers/optistruct/bin/linux64
Example Linux executable name: optistruct_2020_linux64_h3d19
To pick this executable, use the following run options:
-v 2020 -altver h3d19
The Alternate Version argument for the -altver option is anything that is present after linux64_ for serial executables, and between linux64_ and _impi for MPI executables, within the OptiStruct executable name (in this example, to pick this executable the Alternate Version argument should be h3d19). By default, the serial executable is picked; if MPI or GPU executables are required, then corresponding run options, like -ddm or -fso, should be used in conjunction with -altver.
Windows
$ALTAIR_HOME\hwsolvers\optistruct\bin\win64
Example Windows executable name: optistruct_2020_win64_h3d19_impi.exe
To pick this executable, use the following run options:
-ddm -v 2020 -altver h3d19
The Alternate Version argument for the -altver option is anything that is present after win64_ for serial executables, and between win64_ and _impi for MPI executables, within the OptiStruct executable name (in this example, to pick this executable the Alternate Version argument should be h3d19). By default, the serial executable is picked; if MPI or GPU executables are required, then corresponding run options, like -ddm or -fso, should be used in conjunction with -altver.
Note:
1. By default, the highest version from within the available executables will be used if the -v option is not defined.
2. By default, the H3D 14 executable is picked unless H3D 19 executables are requested using -altver.
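The naming convention above can be illustrated with plain shell string manipulation. This is only a sketch using the example executable names from this section; it is not a guaranteed pattern for every build:

```shell
#!/bin/sh
# Extract the Alternate Version token from the example executable names above
# (illustrative only; follows the naming convention described in this entry).
serial=optistruct_2020_linux64_h3d19
mpi=optistruct_2020_win64_h3d19_impi.exe

s=${serial##*linux64_}   # text after "linux64_"  -> h3d19
m=${mpi##*win64_}        # text after "win64_"    -> h3d19_impi.exe
m=${m%%_impi*}           # drop the "_impi..." suffix -> h3d19

echo "$s $m"
```

Both names yield the same Alternate Version argument, h3d19, which is what would be passed to -altver.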
All Platforms
-amls YES, NO Invokes the external AMLS eigenvalue solver. The AMLS_EXE environment variable needs to point to the AMLS executable for this setting to work.

Overrides the PARAM, AMLS setting in the input file.

(Example: optistruct infile.fem -amls yes)

Linux
-amlsncpu Integer > 1 Defines the number of CPUs to be used by the external AMLS eigenvalue solver.

OptiStruct and AMLS can be run with different allocations of processors. For example, OptiStruct can be run with 1 processor and AMLS with 4 processors in the same run.

Only valid with -amls run option or when PARAM, AMLS is set to YES.

Overrides the PARAM, AMLSNCPU setting in the input file.

Default = 1

(Example: optistruct infile.fem -amls yes -amlsncpu 4)

Linux
-amlsmem Memory in GB

<Real>

Defines the amount of memory in Gigabytes to be used by the external AMLS eigenvalue solver. This run option is only supported for AMLS versions 5 and later.
Note:
1. This run option will override the memory value set by PARAM, AMLSMEM in the input file and the environment variable AMLS_MEM.
2. This run option is valid only if -amls or PARAM, AMLS is set to YES.
3. The minimum memory value allowed is equal to 1 GB. If a value lower than 1 GB is specified, it is automatically reset to 1 GB.
Linux
-amses YES/BOTH Invokes the AMSES eigenvalue solver.
YES
Activates AMSES for the structural model.
BOTH
Activates AMSES for both the structure and fluid parts of the model.
Note: When the executable is run directly (not recommended), then -amses (without any other arguments) will activate AMSES for the structural model. This is not possible when running using the script or the Compute Console (ACC).
All Platforms
-analysis N/A Submit an analysis run even if optimization cards are present in the model. This option will still read and check the optimization data, and the job will be terminated if any errors exist (see -optskip below to submit the analysis run without performing the check on optimization data).

The -analysis run option has no effect if the CHECK entry is present in the control section.

Cannot be used with -check or -restart.

(Example: optistruct infile.fem -analysis)

If an OptiStruct optimization model has DOPTPRM, DESMAX, 0, the number of license units checked out is the same as the checkout for an OptiStruct analysis job.

All Platforms
-asp N/A When INCLUDE files are used, this option strips the path from all filenames in the input file, leaving only a root name with extension. This forces all input and output files to be located inside the same folder (or subfolders defined using the -incpath option).
When both -asp and -incpath run options are used, the path defined with -incpath is also stripped from all but the actual folder’s name and is treated relative to the run folder.
Note: This option is useful only when automated scripts are used to submit jobs to multi-user (local or remote) machines.
All Platforms
-buildinfo N/A Displays build information for selected solver executables. OptiStruct
-check N/A Submit a check job through the command line.

The memory needed is automatically allocated.

Cannot be used with -analysis, -optskip or -restart.

(Example: optistruct infile.fem -check)

All Platforms
-checkel YES, NO, FULL
Note: An argument for -checkel is optional. If an argument is not specified, the default argument (YES) is assigned.
NO
Element quality checks are not performed, but mathematical validity checks are performed.
YES (Default)
The geometric quality of each element is checked (this also applies when no argument is given). Any violation of the error limits is counted as a fatal error and the run will stop. Any violation of warning limits is non-fatal. Error or warning messages are printed for elements violating the limits, along with the offending property values. The amount of output is limited to the first 3 occurrences for each individual case, plus a summary table of all errors.
FULL
The same checks are performed as for YES, but the error or warning messages are printed for all of the elements violating the error or warning limits.

(Example: optistruct infile.fem -checkel full)

(Example: optistruct infile.fem -checkel)

All Platforms
-compose {compose_installation_path} This option specifies the location of the Compose installation directory.

This run option must be used when Altair Compose is invoked during an OptiStruct run, in case of an external response (DRESP3).

(Example: optistruct infile.fem -compose {compose_installation_path})

For further information, refer to External Responses using Altair Compose.

All Platforms
-compress N/A Submits a compression run.

Removes duplicate (matching) material and property definitions.

Property definitions referencing deleted material definitions are updated with the retained matching material definition (reduction of property definitions occurs after this process).

Element definitions referencing deleted property definitions are updated with the retained matching property definition. The resulting Bulk Data file will be written to a file named <filename>.echo.

It is assumed that there is no optimization, nonlinear or thermal-material data in the Bulk Data. If such data are present in the input file, the resulting file (<filename>.echo) may not be valid.

The -compress run option cannot be used in combination with any other option as OptiStruct terminates the run after the .echo file is generated.

(Example: optistruct infile.fem -compress)

All Platforms
-core in, out, min
in
In-core solution is forced.
out
Out-of-core solution is forced.
min
Minimum core solution is forced.

The solver assigns the appropriate memory required. If there is not enough memory available, OptiStruct will error out. Overrides the -len option.

(Example: optistruct infile.fem -core in)

All Platforms
-cores auto, N Total number of cores to be used for an MPI job.
For both arguments (auto and N), np = 2 will be enforced.
auto
All the cores available in the machine are used. The value of nt is then determined as nt = (number of available cores)/np.
N
User-defined value for the number of cores. The value of nt is then determined as nt = N/np.
Note: Number of cores = np * nt.

Examples:

optistruct infile.fem -cores auto

optistruct infile.fem -cores 12
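The np/nt bookkeeping above can be sketched as follows. This is a minimal illustration of the relation Number of cores = np * nt, with np fixed at 2 as stated above; OptiStruct performs this computation internally:

```shell
#!/bin/sh
# -cores N with np enforced to 2: nt = N / np.
cores=12
np=2
nt=$((cores / np))
echo "np=$np nt=$nt total=$((np * nt))"
```

So -cores 12 corresponds to 2 MPI processes with 6 SMP threads each.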

All Platforms
-cpu, -proc, -nproc, -ncpu, -nt or -nthread Number of cores Number of cores to be used for SMP solution.

(Example: optistruct infile.fem -ncpu 2)

All Platforms
-ddm  N/A Runs MPI-based Hybrid Shared/Distributed Memory Parallelization (SPMD) in Domain Decomposition Mode.

DDM is activated by default when any MPI run is requested by specifying -np; in addition, MPI process grouping via -ddmngrps AUTO or PARAM,DDMNGRPS,AUTO is also automatically activated by default for any MPI run. MMO or FSO MPI runs are only active if explicitly identified via -mmo or -fso.

Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms.
-ddmngrps Number of MPI process groups

< Integer ≥1, MAX, MIN, AUTO, or -1>

Integer ≥1
Identifies the number of groups into which the specified MPI processes (-np) are divided. This parameter is supported for DDM Level 1 parallelization via:
• Global Search Option (DGLOBAL)
AUTO or -1 (Default)
This option is only supported for the Task-based DDM parallelization solution (first option mentioned above). It will heuristically determine the number of groups that are required for the specified model.
MAX
Automatically enforces the maximum number of groups (which is equivalent to a pure Level 1 DDM task-based parallelization run).
MIN
Automatically enforces the minimum number of groups (which is equivalent to a purely geometric partition based parallelization (Level 2 DDM) run)
Note: PARAM,DDMNGRPS is also available for this feature. If both are defined simultaneously, then PARAM,DDMNGRPS is overwritten by the value defined on the -ddmngrps run option.

-ddmngrps should not be used in conjunction with -mmo run option.

Refer to DDM Level 1 – Task-based parallelization in Domain Decomposition Method (DDM).

All Platforms
-delay Number of seconds Delays the start of an OptiStruct run for the specified number of seconds. This functionality does not use licenses, computer memory or CPU before the start of the run (that is, until the delay expires).
This option may be used to wait for computing resources which have not yet been freed by the previous run. If the previous run crashed, the license may still be locked by the license server (depending on the timeout value), or the memory may still be reserved by the Operating System.
Note:
• The -delay option applies to the current job. If the job is submitted to the Compute Console (ACC) queue, then the delay will start after the Compute Console (ACC) releases this job to run.
• If the run is started using the Compute Console (ACC), the Schedule delay option should be used to delay starting the queue.
All Platforms
-dir N/A The run location is changed to the directory where the input file is located before the run is initiated. All Platforms
-ffrs YES/NO Invokes the external FastFRS (Fast Frequency Response Solver) solver. The FASTFRS_EXE environment variable should point to the FastFRS executable for this setting to work.

Overrides the PARAM,FFRS setting in the input file.

(Example: optistruct infile.fem -ffrs yes)

Linux
-ffrsncpu 1, 2, or 4 Defines the number of CPUs to be used by the external FastFRS solver. This parameter will set the environment variable OMP_NUM_THREADS.
The default value is the current value of OMP_NUM_THREADS.
Note: This value can be set by the command line arguments -nproc or -ncpu.

OptiStruct and FastFRS can be run with different allocations of processors. For example, OptiStruct can be run with 1 processor and FastFRS with 4 processors in the same run.

Valid only when the -ffrs run option or PARAM, FFRS is set to YES.

Overrides the PARAM, FFRSNCPU setting in the input file.

Default: Number of processors used by OptiStruct.

(Example: optistruct infile.fem -ffrs yes -ffrsncpu 4)

Linux
-ffrsmem Memory in GB

<Real>

Defines the amount of memory in Gigabytes to be used by the external FastFRS solver. This run option is only supported for FastFRS versions 2 and later.
Note:
1. This run option will override the memory value set by PARAM, FFRSMEM in the input file and the environment variable FFRS_MEM.
2. This run option is valid only when the -ffrs run option or PARAM, FFRS is set to YES.
3. The minimum memory value allowed is equal to 1 GB. If a value lower than 1 GB is specified, it is automatically reset to 1 GB.
Linux
-filemap <filename> This option is useful internally, during the submission of a job to a remote server or queuing system.

Usually, during submission of the job to a remote server or queuing system, filenames specified in the input deck cannot be used (for example, when they contain Windows-specific names, and the remote server is using Linux). This option is also used when the input file consists of INCLUDE files with identical names (in different locations).

The contents of <filename> specify a dictionary of filenames (a list of filename pairs, one filename per line). The order of pairs in the file is irrelevant. Before opening any input file OptiStruct will check this map, and instead of opening the file as defined by the input deck, it will open the corresponding filename in the dictionary.

Usage of -filemap assures that all messages produced by OptiStruct will refer to the original setup (for example, show Windows paths, even when job runs on Linux).

When -asp option is also defined, -filemap targets must specify filenames only (without path). -filemap is recommended to be used with -asp option.

The contents of <filename> are split in two groups.

The first group refers to the names used on include statements. For these entries, the first name in a pair must be a single asterisk and a number (n). This indicates a map to the nth file listed by -inventory or in the Optimization Summary section of the .out file. Any potential INCLUDE files used during ASSIGN, UPDATE also belong to this group.

The second group is the list of other names. The first name in a pair is compared verbatim to the filename used on ASSIGN, RESTART, and similar cards.

Example (notice that spaces at the start of each line are optional, added here for readability):
*2
/scratch/file_155.fem
restart.sh
/scratch/folder/restart.txt

The use of this filemap will result in substituting /scratch/file_155.fem for the second INCLUDE file and file /scratch/folder/restart.txt will be substituted instead of any file referenced as 'restart.sh' with no path (presumably on the RESTART card).
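As a sketch, the dictionary from the example above could be generated by a submission script like this. The mapped paths are the illustrative ones from this section, and the actual run line is commented out since it requires an OptiStruct installation:

```shell
#!/bin/sh
# Write the -filemap dictionary: a list of filename pairs, one name per line.
cat > filemap.txt <<'EOF'
*2
/scratch/file_155.fem
restart.sh
/scratch/folder/restart.txt
EOF

# optistruct infile.fem -filemap filemap.txt   # actual submission (not run here)
head -n 1 filemap.txt
```

The first pair remaps the second INCLUDE file (the *2 convention), and the second pair remaps the file referenced as 'restart.sh'.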

The comparison is always case sensitive, even on Windows hosts which disregard the case in filenames.
Note: This option is useful only when automated scripts are used to submit jobs to multi-user (local or remote) machines.
All Platforms
-fixlen RAM (In MB, by default)

See Comment 5

Disables dynamic memory allocation.
OptiStruct will allocate the given amount of memory and use it throughout the run. If this memory is not available, or if the allocated amount is not sufficient for the solution process, OptiStruct will terminate with an error.
CAUTION:
1. The MUMPS and AMLS solvers will still perform their own memory allocation, in addition to the amount specified by -fixlen.
2. -fixlen is strongly discouraged for DDM runs, because the fixed memory allocated by OptiStruct will not be used by MUMPS and is usually wasted (MUMPS is the only solver available in DDM).
3. -fixlen is not recommended with AMSES, as it prevents AMSES from dynamically allocating memory for the solution of a large number of eigenvalues.

To avoid over specifying the memory when using this option, it is suggested first to run OptiStruct with the -check option and use the results of that run to properly define the memory size for the -fixlen option.

On certain platforms, this option avoids memory fragmentation and allows allocating more memory than is possible with dynamic memory allocation.

Overwritten by -len and -core options.

(Example: optistruct infile.fem -fixlen 500)

All Platforms
-gpu N/A Activates GPU computing. All Platforms
-gpuid N N: Integer, optional. Selects the GPU card.

Default = 1

All Platforms
-h N/A Displays script usage. All Platforms
-hostfile <filename> This option allows usage of multiple hosts (machines) for a MPI run, by specifying the list of hosts in a separate file (<filename>).

When -np is not specified, each line in the file (<filename>) will represent one MPI process on the corresponding host.

Examples:

Let hostfile.txt be the file that contains the list of hosts, with the following contents.

host1

host2

Example 1: -np is specified.
optistruct infile.fem -np 8 -hostfile hostfile.txt

In this example, 4 MPI processes will be used for each host (host1 and host2).

Example 2: -np is not specified.

optistruct infile.fem -hostfile hostfile.txt
In this example, 1 MPI process will be used for each host (host1 and host2).
All Platforms
-hostmem <yes, no, blank> This run option is available for MPI runs.
yes or blank
This is the default interpretation of the memory options defined by -fixlen, -minlen, -maxlen, -len, and -ramdisk. It means that the specified memory values represent the total memory of all processes on each host. This is the default starting from OptiStruct 2021.1 (prior to this release, the default behavior was consistent with -hostmem no).
no
Implies that the specified memory values represent the memory per MPI process.
• Example for per-host memory setting (Default):
optistruct infile.fem -np 4 -fixlen 100 -hostmem yes

In this example:

If all 4 MPI processes are allocated to 1 host, then each process will use -fixlen = 100/4 = 25.

If 4 processes are allocated to 2 hosts evenly, then each process will use -fixlen = 100/2 = 50.

If 4 processes are allocated to 2 hosts unevenly (1 process to host 1 and 3 processes to host 2), then the process on host 1 will use -fixlen=100 and each process on host 2 will use -fixlen=100/3=33.

This feature can also be set using SYSSETTING,HOSTMEM,YES.

• Example for per-process Memory setting:
optistruct infile.fem -np 4 -fixlen 100 -hostmem no

In this example, each MPI process will use -fixlen = 100.

If -hostmem no is not specified, then the default is -hostmem yes.

This feature can also be set using SYSSETTING,HOSTMEM,NO.
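The per-host division above can be sketched as follows. This is a minimal illustration using the numbers from the -fixlen 100 example; OptiStruct performs this bookkeeping internally:

```shell
#!/bin/sh
# -hostmem yes: each process on a host gets (specified value) / (processes on that host).
fixlen=100
for procs_on_host in 4 2 1 3; do
  echo "host with $procs_on_host process(es): fixlen per process = $((fixlen / procs_on_host))"
done
```

This reproduces the three cases above: 4 processes on one host give 25 each, 2 per host give 50 each, and the uneven 1/3 split gives 100 and 33.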

All Platforms
-hosts List of host names (comma separated) This option allows usage of multiple hosts (machines) for a MPI run, by directly specifying the hosts as argument to the run option.

When this option is used, the number of MPI processes (-np) must also be specified.

(Example: optistruct infile.fem -np 4 -hosts host1,host2)

All Platforms
-incpath <path> The incpath option modifies the search for files defined in the input deck on INCLUDE <filename> cards. All folders defined on this option are used to search for any include file defined in the input deck. If the additional option -localfilesonly is used, then all paths are stripped from INCLUDE cards and include files are searched only in these locations. If this option is used, then incpath from the .cfg file has no effect. When <filename> on incpath is a relative path, it is always relative to the run folder (the location where the job is executed). The <path> is used when the search for the included file fails for all standard locations. The search is then performed as follows:
• When the filename defined on the INCLUDE card represents a file with a relative path (or has no path), that file is searched relative to each incpath defined.
• When this fails, the filename defined on INCLUDE is stripped of its path and searched in each folder defined by each incpath.
• The search stops after the first matching file is found, and no warning is issued if more than one file could match the rules above; therefore, multiple incpath folders should be used with caution.

(Example: -incpath {C:/incfolder})

All Platforms
-inventory N/A This option forces OptiStruct into a short run, which produces a special file named <filename>.flst (.flst file). This file contains a list of all input files needed for a run and their actual locations.

The .flst file uses XML format. Example:

<results_catalog>
  <input name="file1.fem"/>
  <input name="SUBDIR1/inc_11.fem"/>
  <data name="../temp-tests/testnp100/cms_cbn_testinrel.h3d"/>
</results_catalog>

In this file, the OptiStruct input files are referenced by 'input' and all other files listed in ASSIGN or RESTART are referenced by 'data'.

Because -inventory stops OptiStruct immediately after it reads all input files, it may not detect errors in the input deck. If both -inventory and -check are present, then a full check run is also performed.
In the presence of -filemap, the contents of the .flst file will show the actual location of each file, while all other outputs from the solver will hide the file mapping. In part-instance mode, the same file may be read multiple times, which results in the same line being repeated multiple times. This is consistent with the .flst file created with the -inventory option, as it will require multiple lines to remap such a file.

All Platforms
-len RAM (In MB, by default)

See Comment 5

Preferred upper bound on dynamic memory allocation. When different algorithms can be chosen, the solver will try to use the fastest algorithm which can run within the specified amount of memory. If no such algorithm is available, the algorithm with the minimum memory requirement will be used. For example, the sparse linear solver can run in-core, out-of-core or min-core, and the appropriate mode will be selected. The -core option will override the -len option.

The default for -len is 8000 MB; this means that for all except very small models, OptiStruct will use only the minimum memory needed to run the job. If the -len value is larger than the amount of available physical RAM, it may cause excessive swapping during computations and significantly slow down the solution process.

Default = 8000 MB.

(Example: optistruct infile.fem -len 32)

Best practices for -len specification:

For proper memory allocation while using -len in an OptiStruct run, avoid using the exact reported memory estimate value (for example, from a run using -check). The -len value should be provided based on the actual memory of the system. This is the recommended memory limit to run the job; it may not necessarily represent the memory utilized by the job or the actual memory limit. This way, the job is more likely to run with the best possible performance.
If the same system is shared by multiple jobs, the memory allocation should follow the same procedure as above, except that the individual maximum memory should be used in place of the total system memory. If a job runs out-of-core instead of in-core (that is, it exceeded the memory allocation), it will still run very efficiently. However, make sure that the job does not exceed the actual memory of the system itself, as this will slow the run down by a large factor. The recommended method to deal with this is to specify -maxlen as the actual memory of the system, to limit the maximum memory that can be used on the system.

Note: If a value greater than 16 GB is specified, the internal long (64-bit) integer sparse direct solver is activated automatically.

All Platforms
-lic FEA, OPT
FEA
FE analysis only (OptiStructFEA).
OPT
Optimization (OptiStruct or OptiStructMulti).

The solver checks out a license of the specified type before reading the input data. Once the input data is read, the solver verifies that the requested license is of the correct type. If this is not the case, OptiStruct will terminate with an error.

No default

(Example: optistruct infile.fem -lic FEA)

All Platforms
-licwait Hours to wait for a license to become available
Note: An argument for -licwait is optional. If the argument is not specified, the default argument (12) is assigned.

If -licwait is present and sufficient Altair Units (AU) are not available, OptiStruct will wait for up to the number of hours specified (default = 12) for licenses to become available and then will start to run. The maximum wait period that can be specified is 168 hours (a week). OptiStruct will check for available Altair Units every two minutes.

Note: If sufficient units are not available initially, OptiStruct will wait for two minutes before checking again. Therefore, this process does not guarantee any place in a queue for license checkout.
If sufficient units are checked back in to the license server inside the two-minute window, but another process requests the AUs before OptiStruct checks again, the units will be taken up by the other process, and OptiStruct will continue to wait until enough AUs are available at the time it checks for their availability (every two minutes).

All Platforms
-localfilesonly N/A Similar to -asp, but it affects only the INCLUDE paths. All Platforms
-manual N/A Launches the online OptiStruct User Guide. All Platforms
-maxlen RAM (In MB, by default)

See Comment 5

Hard limit on the upper bound of dynamic memory allocation. OptiStruct will not exceed this limit.

No default

(Example: optistruct infile.fem -maxlen 9000)

All Platforms
-minlen RAM (In MB, by default)

See Comment 5

Hard limit on the lower bound of dynamic memory allocation. This is the minimum amount of memory allocated in the dynamic memory allocation process and OptiStruct will not go below this limit.

Default = 10% of -len

(Example: optistruct infile123.fem -minlen 200)

All Platforms
-mmo N/A The -mmo option can be used to run multiple optimization models in a single run.

Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms.
-monitor N/A Monitor convergence from an optimization or nonlinear run. Equivalent to SCREEN, LOG in the input deck. All Platforms
-mpi i (Intel MPI), pl (IBM Platform-MPI (formerly HP-MPI)), ms (MS-MPI), pl8 (for versions 8 and newer of IBM Platform-MPI)
Note: An argument for -mpi is optional. If an argument is not specified, Intel MPI is used by default.

Specify the Message Passing Interface (MPI) type for MPI-based SPMD runs on supported platforms.

Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms.
-mpiargs <arguments for mpirun> This run option can be used in MPI-based parallelization runs to specify additional arguments for mpirun.
Note: This option is valid for an MPI run only.

(Example: optistruct infile.fem -mpi i -np 4 -mpiargs "<args_for_mpirun>")

Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms.
-mpipath path Specify the directory containing HP-MPI's mpirun executable.
Note: This option is useful if MPI environments from multiple MPI vendors are installed on the system. Valid for an MPI run only.

(Example: optistruct infile.fem -mpi -np 4 -mpipath /apps/hpmpi/bin)

Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms.
-ncpu Number of cores Same as -cpu All Platforms
-ngpu Number of GPUs Identifies the number of GPU cards to be used for the solution.

Default = 1. Maximum = 8

All Platforms
-nlrestart Subcase ID Restart an explicit dynamic solution sequence from the specified Subcase ID. If the Subcase ID is not specified, the run restarts from the first explicit dynamic subcase that ended with an error in the previous run.
Note: The explicit dynamic solution sequence is a series of explicit dynamic subcases (ANALYSIS=EXPDYN) linked by CNTNLSUB.

All Platforms
-np Total number of MPI processes for MPI runs Total number of MPI processes to be used in MPI runs in SPMD. Even if multiple nodes are used in a cluster MPI run, -np still indicates the total number of MPI processes for the entire run across multiple cluster nodes.
Note: If -nt is not defined, then it is recommended that -np be set lower than the total number of available cores. If -nt is specified in addition to -np, then it is recommended that -np * -nt not exceed the total number of available cores. For more detailed information, refer to Hybrid Shared/Distributed Memory Parallelization (SPMD).
(Example: optistruct infile.fem -ddm -np 4)

Not all platforms are supported. Refer to Hybrid Shared/Distributed Memory Parallelization (SPMD) for the list of supported platforms.
-nproc Number of cores Same as -cpu All Platforms
-nt Number of cores Same as -cpu All Platforms
-nthread Number of cores Same as -cpu All Platforms
-optskip N/A Submit an analysis run without performing a check on optimization data (skip reading all optimization-related cards). Cannot be used with -check or -restart.

(Example: optistruct infile.fem -optskip)

All Platforms
-out N/A Echoes the output file to the screen. This takes precedence over the SCREEN I/O Options Entry.

(Example: optistruct infile.fem -out)

All Platforms
-outfile Prefix for output filenames Option to direct the output files to a directory different from the one in which the input file exists. If such a directory does not exist, the last part of the path is assumed to be the prefix of the output files. This takes precedence over the OUTFILE I/O Options Entry.

(Example: optistruct infile.fem -outfile results; here OptiStruct will output results.out, etc.)

All Platforms
-proc Number of cores Same as -cpu All Platforms
-radopt Run Radioss optimization in OptiStruct Option to run Radioss optimization in OptiStruct. A Radioss optimization file <name>.radopt should be input to OptiStruct, and the optional -radopt run option may be specified to request an optimization run for a Radioss input deck.
Note: The Radioss Starter and input files supporting the optimization input should be available in the same directory as the <name>.radopt file.

Refer to Design Optimization in the User Guide for more information.

All Platforms
-ramdisk Size of virtual disk (In MB, by default)

See Comment 5

Option to specify an area in RAM allocated to store information which otherwise would be stored in scratch files on the hard drive. The upper limit of RAMDISK for the Compute Console (ACC) or the OptiStruct script is 10,000,000 (10 TB).
(Example: optistruct infile.fem -ramdisk 800)

For a more detailed description, see the RAMDISK setting on the SYSSETTING I/O Options Entry.

All Platforms
-reanal Density threshold This option can only be used in combination with -restart. Inclusion of this option on a restart run will cause the last iteration to be reanalyzed without penalization. If the "density threshold" given is less than the value of MINDENS (default = 0.01) used in the optimization, all elements will be assigned the densities they had during the final iteration of the optimization. As there is no penalization, stiffness will now be proportional to density. If the "density threshold" given is greater than the value of MINDENS, those elements whose density is less than the given value will have a density equal to MINDENS; all others will have a density of 1.0.

(Example: optistruct infile.fem -restart -reanal 0.3)

All Platforms
-restart filename.sh Specify a restart run. If no argument is provided, OptiStruct will look for the restart file, which will have the same root as the input file with the extension .sh. If you enter an argument on PC, you will need to provide the full path to the restart file, including the file name. Cannot be used with -check, -analysis or -optskip.

(Example: optistruct infile.fem -restart); here OptiStruct looks for the restart file infile.sh.

(Example: optistruct infile.fem -restart C:\oldrun\old_infile.sh); here OptiStruct looks for the restart file old_infile.sh.

All Platforms
-rnp Number of processors Number of processors to be used in Hybrid Shared/Distributed Memory Parallelization (SPMD) for EXPDYN analysis.

(Example: optistruct infile.fem -mpi -rnp 4)

All Platforms
-rnt Number of cores Number of cores to be used for OptiStruct SMP for EXPDYN analysis.

(Example: optistruct infile.fem -rnt 2)

All Platforms
-rsf Safety factor Specify a safety factor over the limit of allocated memory. Not applicable when -maxlen is used.
(Example: optistruct infile.fem -rsf 1.2) (Example: optistruct infile.fem -len 32 -rsf 1.2) (Example: optistruct infile.fem -core out -rsf 1.2) All Platforms

-savelog N/A Saves the screen output to a permanent file named <filename>.log. This can be useful during debugging, as OptiStruct prints some messages only to the screen. The SCREEN I/O Options Entry is required in conjunction for maximum information to be printed to the .log file. All Platforms

-scr or -tmpdir path, filesize=n, slow=1 Option to choose directories in which the scratch files are to be written. The filesize=n and slow=1 arguments are optional. Multiple arguments may be separated by a comma.
path Provide the path to the directory for scratch file storage.
filesize=n Defines the maximum file size (in GB) that may be written to that location.
slow=1 Indicates a network drive.
(Example: optistruct infile.fem -scr filesize=2,slow=1,/network_dir/tmp) Multiple scratch directories may be defined through repeated instances of -tmpdir or -scr. (Example: optistruct infile.fem -tmpdir C:\tmp -tmpdir filesize=2,slow=1,Z:\network_drive\tmp) This overwrites the environment variable OS_TMP_DIR and the TMPDIR definition in the I/O Options Entry section of the input deck. For a more detailed description, see the TMPDIR I/O Options Entry. All Platforms

-scrfmode basic, buffered, unbuffer, smbuffer, stripe, mixfcio Option to select a different mode of storing scratch files for the linear solver (especially for out-of-core and minimum-core solution modes). Multiple arguments may be comma-separated. (Example: optistruct infile.fem -scrfmode buffered,stripe -tmpdir C:\tmp) For a description of the arguments, see the SCRFMODE setting on the SYSSETTING I/O Options Entry. All Platforms

-testmpi N/A Check if MPI is configured properly and if the SPMD version of the OptiStruct executables is available for this system.
(Example: optistruct infile.fem -mpi -np 4 -mpipath /apps/hpmpi/bin -testmpi) All Platforms

-sp N/A Option to select the Single Precision executable for the run. This allows you to select the 64-bit integer, 32-bit floating point build for either SMP or MPI runs. All Platforms

-v Version Controls the version of the OptiStruct executable to be used. The OptiStruct executables are available in the following folder within both the Linux and Windows installations.
Linux
$ALTAIR_HOME/hwsolvers/optistruct/bin/linux64

Example Linux executable name: optistruct_2020_linux64_impi
To pick this executable, use the following run options:
-ddm -v 2020
The Version argument for the -v option is whatever appears between optistruct_ and _linux64 within the OptiStruct executable name (in this example, the Version argument should be 2020). By default, the serial executable is picked; if MPI or GPU executables are required, the corresponding run options, such as -ddm or -gpu, should be used in conjunction with -v.
Windows
\$ALTAIR_HOME\hwsolvers\optistruct\bin\win64
Example Windows executable name: optistruct_2020_win64_impi.exe
To pick this executable, use the following run options:
-ddm -v 2020
The Version argument for the -v option is whatever appears between optistruct_ and _win64 within the OptiStruct executable name. By default, the serial executable is picked; if MPI or GPU executables are required, the corresponding run options, such as -ddm or -gpu, should be used in conjunction with -v.
Note: If the -v option is not defined, the highest version among the available executables is used by default.
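As an illustrative sketch of the naming rule above, the Version token can be extracted from an executable name with standard shell parameter expansion. The filename below is a hypothetical example following the documented pattern, not necessarily a file present in every installation:

```shell
# Hypothetical executable name following the documented pattern
# optistruct_<Version>_<platform>[_impi].
name="optistruct_2020_linux64_impi"

# Strip the leading "optistruct_" prefix, then keep everything
# before "_linux64" (use _win64 for Windows names).
version="${name#optistruct_}"
version="${version%%_linux64*}"

echo "$version"    # prints: 2020
```

Passing this token as `-v 2020` (with `-ddm` for the `_impi` build) selects the corresponding executable.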

-version N/A Checks version and build time information from OptiStruct. All Platforms
-xml N/A Option to specify that the input file is an XML file for a multibody dynamics solution sequence. All Platforms

1. Any arguments containing spaces or special characters must be quoted in {}, for example: -mpipath {C:\Program Files\MPI}. File paths on Windows may use either backslashes "\" or forward slashes "/", but a path containing backslashes "\" must be quoted.
2. Currently, the solver executable (OptiStruct) does not have a specific limit on the number of processors/cores assigned to the SMP part of the run (-nt/-nthread).
4. The order of the above options is arbitrary. However, options for which arguments are optional should not be followed immediately by the INPUT_FILE_NAME argument.
5. For memory-related options (-len, -fixlen, -minlen, -maxlen, and -ramdisk), the default unit of memory is MB. However, the suffixes M/m and G/g can be used to specify memory in MB and GB, respectively.
Examples: -minlen 2G or -minlen 2000M
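As a sketch of the suffix convention in Comment 5 (assuming, as the "-minlen 2G or -minlen 2000M" example suggests, the decimal convention 1G = 1000M), a small hypothetical shell helper could normalize such arguments to MB:

```shell
# Convert a memory argument with an optional M/m or G/g suffix to MB.
# Assumes the decimal convention 1G = 1000M, as the example above suggests.
mem_to_mb() {
  case "$1" in
    *[Gg]) echo $(( ${1%?} * 1000 )) ;;   # strip suffix, scale GB to MB
    *[Mm]) echo "${1%?}" ;;               # strip suffix, already MB
    *)     echo "$1" ;;                   # no suffix: MB by default
  esac
}

mem_to_mb 2G      # prints: 2000
mem_to_mb 512m    # prints: 512
mem_to_mb 800     # prints: 800
```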