# Gaussian



| Key facts | |
|---|---|
| Module name | chem/gaussian |
| Availability | bwForCluster_Chemistry |
| License | commercial |
| Citing | See Gaussian manual |
| Links | Homepage; Manual; IOps Reference |
| Graphical interface | See Gaussview |


# 1 Description

**Gaussian** is a general purpose *quantum chemistry* software package for *ab initio* electronic structure calculations. It provides:

- ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);
- basic excited state calculations such as TDHF or TDDFT;
- coupled multi-layer QM/MM calculations (ONIOM);
- geometry optimizations, transition state searches, molecular dynamics calculations;
- property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as
- shared-memory parallel versions for almost all kinds of jobs.

For more information on features please visit Gaussian's *Overview of Capabilities and Features* web page.

# 2 Versions and Availability

A list of versions currently available on the bwForCluster Chemistry can be obtained from the Cluster Information System (CIS):
https://cis-hpc.uni-konstanz.de/prod.cis/Justus/chem/gaussian

On the command line of a particular bwHPC cluster, a list of all available Gaussian versions is displayed by the command

$ module avail chem/gaussian

## 2.1 Parallel computing

The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and the parallel version is done via the statement

%NProcShared=PPN

in the *Link 0 commands* section before the *route section* at the beginning of the Gaussian input file. *PPN* should be replaced by the number of parallel cores. This value **must** be identical to the *ppn* value specified when requesting resources from the queueing system. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without a *%NProcShared* statement the serial version of Gaussian is used.
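
For illustration (not taken from the module documentation), the first lines of an 8-core parallel input file could look as follows; the memory request and the route line are placeholders only:

    %NProcShared=8
    %Mem=8GB
    #P B3LYP/6-31G(d,p) SP

Here *%NProcShared=8* corresponds to a queueing system request of *ppn=8*.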

# 3 Usage

## 3.1 Loading the Module

You can load the default version of *Gaussian* with the command:

$ module load chem/gaussian

The Gaussian module does not depend on any other module (no dependencies).

If you wish to load a specific version you may do so by specifying the version explicitly, e.g.

$ module load chem/gaussian/g09.D.01

to load version *g09.D.01* of Gaussian.
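
To check which Gaussian version is currently loaded, you can, for example, list your loaded modules:

$ module list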

## 3.2 Running Gaussian interactively

After loading the Gaussian module you can run a quick interactive example by executing

$ time g09 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com

In most cases, running Gaussian requires setting up a command input file and piping that input into g09.
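
For example, a typical non-interactive run (file names are placeholders) could be started as

$ g09 < myjob.com > myjob.log 2>&1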

## 3.3 Creating Gaussian input files

For documentation about how to construct input files see the Gaussian manual. In addition, the program Gaussview provides a very good graphical user interface for constructing molecules and for setting up calculations. Such calculation setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.
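
As an illustration only (this is not one of the example files shipped with the module), a complete minimal input file for a small geometry optimization could look like the following sketch; note that a blank line must follow the last atom of the molecule specification:

    %NProcShared=4
    %Chk=water_opt.chk
    #P HF/6-31G(d) Opt

    Water geometry optimization (illustrative example)

    0 1
    O    0.000000    0.000000    0.117300
    H    0.000000    0.757200   -0.469200
    H    0.000000   -0.757200   -0.469200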

## 3.4 Disk Usage

By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, a user may want to add one more sub-directory level with e.g. the job id and job name for clarity, if not already done by the queueing system.
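
A minimal sketch of how such an additional sub-directory level could be created by hand before starting Gaussian; the job id variable ($MOAB_JOBID) and the directory name are assumptions and depend on your queueing system:

    # create a job-specific scratch sub-directory and point GAUSS_SCRDIR to it (names are hypothetical)
    export GAUSS_SCRDIR="$GAUSS_SCRDIR/${MOAB_JOBID:-interactive}_myjob"
    mkdir -p "$GAUSS_SCRDIR"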

Predicting how much disk space a specific Gaussian calculation requires is a very difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.

For information on how much node-local disk space is available at your cluster and how to request a certain amount of node-local disk space for your calculation from the queueing system, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.

Except for very short interactive test jobs, please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.

# 4 Examples

## 4.1 Single node jobs

### 4.1.1 Queueing system template provided by Gaussian module

The Gaussian *module* provides a simple Moab example of hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the method B3LYP and the basis set 6-31g(df,pd). To submit the example do the following steps:

    $ ws_allocate calc_repo 30; cd $(ws_find calc_repo)
    $ mkdir my_first_job; cd my_first_job
    $ module load chem/gaussian
    $ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./
    $ msub bwforcluster-gaussian-example.moab

The last step submits the example job script *bwforcluster-gaussian-example.moab* to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the local file system ($TMPDIR) of that particular compute node. Please carefully read this *local file system* documentation as well as the comments in the queueing system example script *bwforcluster-gaussian-example.moab*.
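
For orientation only, a stripped-down job script following the same pattern might look like the sketch below. This is not the provided example script; the resource values, file names and the $MOAB_SUBMITDIR variable are assumptions that may need to be adapted to your cluster:

    #!/bin/bash
    #MSUB -N gaussian-example          # job name (illustrative)
    #MSUB -l nodes=1:ppn=8             # single node, 8 cores; must match %NProcShared
    #MSUB -l walltime=02:00:00         # adjust to your calculation

    module load chem/gaussian

    # work in the node-local scratch directory, never in $HOME or $WORK
    cd "$TMPDIR"
    cp "$MOAB_SUBMITDIR/myjob.com" .

    g09 < myjob.com > myjob.log

    # copy the results back to the directory the job was submitted from
    cp myjob.log "$MOAB_SUBMITDIR/"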

# 5 Version-Specific Information

For specific information about version *VERSION*, consult the module system with the command

$ module help chem/gaussian/VERSION

Please read the local module help documentation before using the software. It contains links to additional documentation and resources as well as information about the support contact.