TSUBAME3.0 User's Guide

 

 

 

 

 

 

 

 

 

rev 5

9/25/2017

 


 


Table of contents

1. Introduction to TSUBAME3.0
1.1. System architecture
1.2. Compute node configuration
1.3. Software configuration
1.4. Storage configuration
2. System environment
2.1. Get an account
2.2. Login
2.3. Password Management
2.4. Storage service (CIFS)
3. User Environment
3.1. Change User Environment
3.1.1. List the Available Modules
3.1.2. Display the named module information
3.1.3. Load the named module
3.1.4. List all the currently loaded modules
3.1.5. Unload the named module
3.1.6. Remove all modules
3.2. Usage in job script
3.3. Intel Compiler
3.3.1. Compiler options
3.3.2. Recommended optimization options
3.3.3. Intel 64 architecture memory model
3.4. Parallelization
3.4.1. Thread parallel (OpenMP, Automatic parallelization)
3.4.2. Process parallel (MPI)
4. Job Scheduler
4.1. Available resource type
4.2. Job submission
4.2.1. Job submission flow
4.2.2. Creating job script
4.2.3. Job script - serial job/GPU job
4.2.4. Job script - SMP job
4.2.5. Job script - MPI job
4.2.6. Job script - Hybrid parallel
4.2.7. Job submission
4.2.8. Job status
4.2.9. Job delete
4.2.10. Job results
4.3. Interactive job
4.3.1. x-forwarding
4.4. Storage system
4.4.1. Home Directory
4.4.2. High-speed storage area
4.4.3. Local scratch area
4.4.4. Shared scratch area
4.5. SSH login
5. ISV application
5.1. ANSYS
5.2. Fluent
5.3. ABAQUS
5.4. ABAQUS CAE
5.5. Marc & Mentat / Dytran
5.5.1. Marc & Mentat / Dytran
5.5.2. Marc & Mentat / Dytran Documentations
5.5.3. Marc
5.5.4. Mentat
5.6. Nastran
5.7. Patran
5.8. Gaussian
5.9. GaussView
5.10. AMBER
5.11. Materials Studio
5.11.1. License connection setting
5.11.2. License Usage Status
5.11.3. Start up Materials Studio
5.12. Discovery Studio
5.12.1. License connection setting
5.12.2. License Usage Status
5.12.3. Start up Discovery Studio
5.12.4. User authentication
5.13. Mathematica
5.14. Maple
5.15. AVS/Express
5.16. AVS/Express PCE
5.17. LS-DYNA
5.17.1. Overview LS-DYNA
5.17.2. LS-DYNA
5.18. LS-PrePost
5.18.1. Overview LS-PrePost
5.18.2. LS-PrePost
5.19. COMSOL
5.20. Schrodinger
5.21. MATLAB
5.22. Allinea Forge
6. Freeware
6.1. Computational chemistry Software
6.1.1. GAMESS
6.1.2. Tinker
6.1.3. GROMACS
6.1.4. LAMMPS
6.1.5. NAMD
6.1.6. CP2K
6.2. CFD software
6.2.1. OpenFOAM
6.3. Machine learning, big data analysis software
6.3.1. CuDNN
6.3.2. NCCL
6.3.3. Caffe
6.3.4. Chainer
6.3.5. TensorFlow
6.3.6. R
6.3.7. Apache Hadoop
6.4. Visualization software
6.4.1. POV-Ray
6.4.2. ParaView
6.4.3. VisIt
6.5. Other freeware
6.5.1. gimp
6.5.2. gnuplot
6.5.3. tgif
6.5.4. ImageMagick
6.5.5. pLaTeX2e
6.5.6. Java SDK
6.5.7. PETSc
6.5.8. fftw
Revision History


1. Introduction to TSUBAME3.0

1.1.         System architecture

This system is a shared computer that can be used from various research and development departments at Tokyo Institute of Technology. The compute nodes and the storage systems are connected by a high-speed Omni-Path network. The system is currently connected to the Internet at a speed of 10 Gbps, and in the future it will be connected to the Internet at 10 Gbps via SINET5 (as of August 2017). The system architecture of TSUBAME 3.0 is shown below.

 

 


 

1.2.         Compute node configuration

The compute nodes of this system form a blade-type, large-scale cluster system consisting of 540 SGI ICE XA nodes. Each compute node is equipped with two Intel Xeon E5-2680 v4 processors (2.4 GHz, 14 cores), for a total of 15,120 cores. The main memory capacity is 256 GiB per compute node, 135 TiB in total. Each compute node has four Intel Omni-Path interface ports and is connected in a fat-tree topology by Omni-Path switches.

[Figure: Conceptual rendering of TSUBAME3.0]

 

The basic specifications of TSUBAME 3.0 machine are as follows.

 

 

Compute Node x 540 (configuration per node)

  CPU            Intel Xeon E5-2680 v4 2.4 GHz x 2 CPUs
  Cores/Threads  14 cores / 28 threads x 2 CPUs
  Memory         256 GiB
  GPU            NVIDIA TESLA P100 for NVLink-Optimized Servers x 4
  SSD            2 TB
  Interconnect   Intel Omni-Path HFI 100 Gbps x 4

 

 


 

1.3.         Software configuration

The operating system (OS) of this system has the following environment.

 

·  SUSE Linux Enterprise Server 12 SP2

 

For the application software available on this system, please refer to "5. ISV application" and "6. Freeware".

 

 

1.4.         Storage configuration

This system has high-speed, large-capacity storage for storing various simulation results. On the compute nodes, the Lustre file system is used as the high-speed storage area, and the home directory is shared via GPFS + cNFS. In addition, a 2 TB SSD is installed in each compute node as a local scratch area. A list of the file systems available on this system is shown below.

 

Usage                            Mount point   Capacity      File system
Home directory                   /home         40 TB         GPFS + cNFS
Shared space for applications    /apps
Massively parallel I/O spaces 1  /gs/hs0       4.8 PB        Lustre
Massively parallel I/O spaces 2  /gs/hs1       4.8 PB        Lustre
Massively parallel I/O spaces 3  /gs/hs2       4.8 PB        Lustre
Local scratch                    /scr          1.9 TB/node   xfs (SSD)

 

 

2. System environment

2.1.         Get an account

In order to use this system, it is necessary to register a user account. The application procedure differs for Tokyo Tech members, general members, and temporary members. Refer to the "TSUBAME portal User's Guide" for the procedure for each user category.

 

2.2.         Login

You need to upload the SSH public key to access the login node. Please refer to "TSUBAME portal User's Guide" for operation of public key registration. Once registration of the SSH public key is completed, you can log in to the login node. Login to the login node is automatically distributed by the load balancer. The usage image is shown below.

 

 

Connect to the login node with SSH. You can also transfer files using SFTP.

login.t3.gsic.titech.ac.jp


To connect to a specific login node, log in with the following host name (FQDN).

login0.t3.gsic.titech.ac.jp

login1.t3.gsic.titech.ac.jp
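
For example, assuming the private key corresponding to the registered public key is ~/.ssh/id_rsa and your login name is USERNAME (both are placeholders here), an SSH connection and an SFTP file transfer look like this:

$ ssh USERNAME@login.t3.gsic.titech.ac.jp -i ~/.ssh/id_rsa

$ sftp -i ~/.ssh/id_rsa USERNAME@login.t3.gsic.titech.ac.jp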

 

On the first connection, the following message may be displayed depending on the client settings. In that case, enter "yes".

The authenticity of host ' login0.t3.gsic.titech.ac.jp (131.112.3.21)' can't be established.

ECDSA key fingerprint is SHA256:RImxLoC4tBjIYQljwIImCKshjef4w7Pshjef4wtBj

Are you sure you want to continue connecting (yes/no)?

 

 

(note)  The login nodes have a 4 GB memory limit per process. Please execute programs via the job scheduler.

For the job scheduler available on this system, please refer to "4. Job Scheduler".

 

2.3.         Password Management

The user accounts of this system are managed by LDAP, and authentication within the system is done by SSH key authentication. For this reason, you do not need a password to use the compute nodes, but you will need a password to access the storage service from Windows/Mac terminals on campus.

If you need to change the password, please change from the TSUBAME3.0 portal. The rules of available passwords are described on the TSUBAME3.0 portal password setting page.

 

2.4.         Storage service (CIFS)

In TSUBAME 3.0, users can access the high-speed storage area from the Windows / Mac terminal in the university using the CIFS protocol. You can access it with the following address.

 

 \\gshs.t3.gsic.titech.ac.jp

 

It is accessible with your TSUBAME3.0 account and the password set in the portal. When you access it from Windows, please specify the TSUBAME domain as shown below.

 

UserName     TSUBAME\<TSUBAME3.0 Account Name>

Password     <TSUBAME3.0 Account Password>

 

The folder names T3_HS0, T3_HS1, and T3_HS2 correspond to /gs/hs0, /gs/hs1, and /gs/hs2 on the compute nodes. Please access the folder purchased as a group disk.
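
As a minimal sketch for Mac users (assuming a mount point of /Volumes/T3_HS0 and an account name of ACCOUNT, both placeholders), the share can also be mounted from a terminal:

$ mkdir /Volumes/T3_HS0

$ mount_smbfs '//TSUBAME;ACCOUNT@gshs.t3.gsic.titech.ac.jp/T3_HS0' /Volumes/T3_HS0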

 

3. User Environment

3.1.         Change User Environment

In this system, you can switch the compiler and application use environment by using the module command.

 

3.1.1. List the Available Modules

You can check available modules with "module avail" or "module ava".

 

$ module avail

 

The currently available module environment is shown below.

 

Category           Module name                 Description
Compiler           intel/17.0.4.196            Intel Compiler 17.0.4.196
                   intel/16.0.4.258            Intel Compiler 16.0.4.258
                   pgi/17.5                    PGI Compiler 17.5
                   cuda/8.0.44                 CUDA 8.0.44
                   cuda/8.0.61                 CUDA 8.0.61
MPI library        intel-mpi/17.3.196          Intel MPI 2017.3.196
                   openmpi/2.1.1               OpenMPI 2.1.1
                   mpt/2.16                    SGI MPT 2.16
Debugging and      intel-ins/17.1.3.510645     Intel Inspector 2017 Update 3 (build 510645)
Performance        intel-itac/17.3.030         Intel Trace Analyzer and Collector 2017.3.030
Analysis Tools     intel-vtune/17.4.0.518798   Intel VTune Amplifier XE 2017 Update 4
                   allinea/7.0.5               Allinea Forge 7.0.5
                   papi/5.5.1                  PAPI 5.5.1
                   perfsuite/1.1.4             SGI PerfSuite 1.1.4
ISV application    abaqus/2017                 ABAQUS 2017
                   nastran/2017.1              Nastran 2017.1
                   dytran/2017                 Dytran 2017
                   marc_mentat/2017            Marc/Mentat 2017
                   patran/2017.0.2             Patran 2017.0.2
                   lsdyna/R9.1.0               LS-DYNA R9.1.0
                   lsprepost/4.3               LS-PrePost 4.3
                   ansys/R18.1                 ANSYS R18.1
                   comsol/53                   COMSOL 5.3
                   gaussian16/A03              Gaussian 16 A.03
                   gaussian16_linda/A03        Gaussian 16 with Linda
                   gaussview/6                 GaussView 6
                   amber/16                    Amber 16
                   amber/16_cuda               Amber 16 (CUDA 8 version)
                   schrodinger/Feb-17          Schrodinger Feb-17
                   matlab/R2017a               MATLAB R2017a
                   mathematica/11.1.1          Mathematica 11.1.1
                   maple/2016.2                Maple 2016.2
                   avs/8.4                     AVS/Express 8.4, AVS/Express PCE 8.4
Freeware           gamess/apr202017r1          GAMESS April 20, 2017 R1
                   tinker/8.1.2                TINKER 8.1.2
                   gromacs/2016.3              GROMACS 2016.3
                   lammps/31mar2017            LAMMPS 31 Mar 2017
                   namd/2.12                   NAMD 2.12
                   cp2k/4.1                    CP2K 4.1
                   openfoam/4.1                OpenFOAM 4.1
                   cudnn/5.1                   NVIDIA cuDNN 5.1
                   cudnn/6.0                   NVIDIA cuDNN 6.0
                   cudnn/7.0                   NVIDIA cuDNN 7.0
                   nccl/1.3.4                  NVIDIA NCCL 1.3.4
                   python-extension/2.7.9      Python extension 2.7.9
                   r/3.4.1                     R 3.4.1
                   hadoop/2.8.0                Hadoop 2.8.0
                   pov-ray/3.7.0.3             POV-Ray 3.7.0.3
                   paraview/5.0.1              ParaView 5.0.1
                   visit/2.12.3                VisIt 2.12.3
                   gimp/2.8.22                 GIMP 2.8.22
                   gnuplot/5.0.6               gnuplot 5.0.6
                   tgif/4.2.5                  Tgif 4.2.5
                   imagemagick/7.0.6           ImageMagick 7.0.6
                   texlive/20170704            TeX Live 20170704
                   xpdf/3.04                   Xpdf 3.04
                   a2ps/4.14                   a2ps 4.14
                   tmux/2.5                    tmux 2.5
                   jdk/1.8.0_131               JDK 8 Update 131
                   jdk/1.8.0_144               JDK 8 Update 144
                   php/7.1.6                   PHP 7.1.6
                   petsc/3.7.6/real            PETSc 3.7.6 (real)
                   petsc/3.7.6/complex         PETSc 3.7.6 (complex)
                   fftw/2.1.5                  FFTW 2.1.5
                   fftw/3.3.6                  FFTW 3.3.6

 

 

 

 

3.1.2.Display the named module information

One can display the short information by issuing the command "module whatis MODULE".

 

$ module whatis intel/17.0.4.196

intel/17.0.4.196     : Intel Compiler version 17.0.4.196 (parallel_studio_xe_2017) and MKL

 

 

3.1.3.Load the named module

One can load the named module by issuing the command "module load MODULE".

 

$ module load intel/17.0.4.196

 

Please use the same module that you used at compile time for the module to be loaded in the job script.

 

3.1.4.List all the currently loaded modules

One can list the modules currently loaded by issuing the command "module list".

 

$ module list

Currently Loaded Modulefiles:

1) intel/17.0.4.196   2) cuda/8.0.61

 

 

3.1.5. Unload the named module

One can unload the named module by issuing the command "module unload MODULE".

 

$ module list

Currently Loaded Modulefiles:

  1) intel/17.0.4.196   2) cuda/8.0.61

$ module unload cuda

$ module list

Currently Loaded Modulefiles:

1) intel/17.0.4.196

 

 

 

3.1.6.Remove all modules

One can remove all modules by issuing the command "module purge".

 

$ module list

Currently Loaded Modulefiles:

 1) intel/17.0.4.196   2) cuda/8.0.61

$ module purge

$ module list

No Modulefiles Currently Loaded.

 

 

3.2.         Usage in job script

When executing the module command in the job script, it is necessary to initialize the module command in the job script as follows.

 

[sh, bash]

. /etc/profile.d/modules.sh

module load intel/17.0.4.196

 

[csh, tcsh]

source /etc/profile.d/modules.csh

module load intel/17.0.4.196

 

 

 

 


 

3.3.         Intel Compiler

On this system, the Intel, PGI, and GNU compilers are available. The Intel compiler commands are as follows.

 

 

Language            Command
Fortran 77/90/95    ifort     $ ifort [option] source_file
C                   icc       $ icc [option] source_file
C++                 icpc      $ icpc [option] source_file

 

To use it, please load "intel" with the module command.

If you specify the --help option, a list of compiler options is displayed.

 

3.3.1.Compiler options

The compiler options are shown below.

 

Option

Description

-O0

Disables all optimizations. Used for debugging, etc.

-O1

Affects code size and locality. Disables specific optimizations.

-O2

Default optimizations. Same as -O. Enables optimizations for speed, including global code scheduling, software pipelining, and predication.

-O3

Aggressive optimizations for maximum speed (but does not guarantee higher performance). Includes optimizations such as data prefetching, scalar replacement, and loop transformations.

-xCORE-AVX2

The generated executable will not run on non-Intel processors and it will not run on Intel processors that do not support Intel AVX2 instructions.

-xSSE4.2

The generated executable will not run on non-Intel processors and it will not run on Intel processors that do not support Intel SSE4.2 instructions.

-xSSSE3

The generated executable will not run on non-Intel processors and it will not run on Intel processors that do not support Intel SSSE3 instructions.

-qopt-report=n

Generates optimizations report and directs to stderr.

n=0 : disable optimization report output

n=1 : minimum report output

n=2 : medium output (DEFAULT)

n=3 : maximum report output

-fp-model precise

Tells the compiler to strictly adhere to value-safe optimizations when implementing floating-point calculations. It disables optimizations that can change the result of floating-point calculations. These semantics ensure the accuracy of floating-point computations, but they may slow performance.

-g

Produces symbolic debug information in object file (implies -O0 when another optimization option is not explicitly set)

-traceback

Tells the compiler to generate extra information in the object file to provide source file traceback information when a severe error occurs at runtime. Specifying -traceback will increase the size of the executable program, but has no impact on runtime execution speeds.

 

3.3.2.Recommended optimization options

The recommended optimization options for compilation of this system are shown below.

 

Option

Description

-O3

Aggressive optimizations for maximum speed (but does not guarantee higher performance). Includes optimizations such as data prefetching, scalar replacement, and loop transformations.

-xCORE-AVX2

 

The generated executable will not run on non-Intel processors and it will not run on Intel processors that do not support Intel AVX2 instructions.

 

If the performance of the program deteriorates with the above options, lower the optimization level to -O2 or change the vectorization option. If the results do not match, also try the floating-point option described above (-fp-model precise).
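
For example, a Fortran program (sample.f90 is a placeholder name) can be compiled with the recommended options as follows:

$ module load intel

$ ifort -O3 -xCORE-AVX2 sample.f90 -o sample.out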

 

3.3.3.Intel 64 architecture memory model

Tells the compiler to use a specific memory model to generate code and store data.

 

Memory model

Description

small (-mcmodel=small)

Tells the compiler to restrict code and data to the first 2GB of address space. All accesses of code and data can be done with Instruction Pointer (IP)-relative addressing.

medium (-mcmodel=medium)

Tells the compiler to restrict code to the first 2GB; it places no memory restriction on data. Accesses of code can be done with IP-relative addressing, but accesses of data must be done with absolute addressing.

large (-mcmodel=large)

Places no memory restriction on code or data. All accesses of code and data must be done with absolute addressing.

 

When you specify option -mcmodel=medium or -mcmodel=large, it sets option -shared-intel. This ensures that the correct dynamic versions of the Intel run-time libraries are used.

If you specify option -static-intel while -mcmodel=medium or -mcmodel=large is set, an error will be displayed.

 

<some lib.a library>(some .o): In Function <function>:

  : relocation truncated to fit: R_X86_64_PC32 <some symbol>

…………………

  : relocation truncated to fit: R_X86_64_PC32 <some symbol>

 

This option tells the compiler to use a specific memory model to generate code and store data. It can affect code size and performance. If your program has COMMON blocks and local data with a total size smaller than 2GB, -mcmodel=small is sufficient. COMMONs larger than 2GB require -mcmodel=medium or -mcmodel=large. Allocation of memory larger than 2GB can be done with any setting of -mcmodel.

IP-relative addressing requires only 32 bits, whereas absolute addressing requires 64 bits. IP-relative addressing is somewhat faster, so the small memory model has the least impact on performance.
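
As an illustration, a program whose static data exceeds 2 GB (sample.f90 is a placeholder) could be built with the medium memory model as follows; as noted above, -shared-intel is set automatically in this case.

$ ifort -mcmodel=medium sample.f90 -o sample.out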

 

3.4.         Parallelization

3.4.1.Thread parallel (OpenMP, Automatic parallelization)

The command format when using OpenMP, automatic parallelization is shown below.

 

 

Language

Command

OpenMP

Fortran 77/90/95

$ ifort -qopenmp [option] source_file

C

$ icc -qopenmp [option] source_file

C++

$ icpc -qopenmp [option] source_file

Automatic Parallelization

Fortran 77/90/95

$ ifort -parallel [option] source_file

C

$ icc -parallel [option] source_file

C++

$ icpc -parallel [option] source_file

 

-qopt-report-phase=openmp: Reports loops, regions, sections, and tasks successfully parallelized.

-qopt-report-phase=par: Reports which loops were parallelized.
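
For example, an OpenMP program in C (sample.c is a placeholder) can be compiled and run with an explicitly chosen number of threads as follows:

$ module load intel

$ icc -qopenmp -O3 -xCORE-AVX2 sample.c -o sample.out

$ export OMP_NUM_THREADS=14

$ ./sample.out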

 

3.4.2.Process parallel (MPI)

The command formats for MPI are shown below. Before use, please load the corresponding MPI environment with the module command.

 

MPI Library    Language            Command
Intel MPI      Fortran 77/90/95    $ mpiifort [option] source_file
               C                   $ mpiicc [option] source_file
               C++                 $ mpiicpc [option] source_file
Open MPI       Fortran 77/90/95    $ mpifort [option] source_file
               C                   $ mpicc [option] source_file
               C++                 $ mpicxx [option] source_file
SGI MPT        Fortran 77/90/95    $ mpif90 [option] source_file
               C                   $ mpicc [option] source_file
               C++                 $ mpicxx [option] source_file
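
For example, an MPI program in C (sample.c is a placeholder) can be compiled with Intel MPI as follows; the resulting binary is then launched with mpiexec.hydra inside a job script as shown in 4.2.5.

$ module load intel intel-mpi

$ mpiicc -O3 -xCORE-AVX2 sample.c -o sample_mpi.out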

 

 

4. Job Scheduler

On this system, UNIVA Grid Engine manages the running and scheduling of jobs.

 

4.1.         Available resource type

In this system, a job is executed using logically divided compute nodes called "resource types". When submitting a job, specify how many of each resource type to use (e.g., -l f_node=2). A list of available resource types is shown below.

 

Type   Resource type name   Physical CPU cores   Memory (GB)   GPUs
F      f_node               28                   240           4
H      h_node               14                   120           2
Q      q_node               7                    60            1
C1     s_core               1                    7.5           0
C4     q_core               4                    30            0
G1     s_gpu                2                    15            1

 

 

 

 

 

 

 

 

 

 

· "Physical CPU Cores", "Memory (GB)", "GPUs" are the available resources per resource type.

· Up to 72 same resource types. Resource type combinations are not available.

· Maximum run time is 24 hours.
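
For example, a job that uses two units of resource type F for up to 24 hours would contain the following resource requests in its job script (the program lines are omitted here):

#$ -l f_node=2
#$ -l h_rt=24:00:00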

 

 


 

4.2.         Job submission

To execute the job in this system, log in to the login node and execute the qsub command.

 

4.2.1.Job submission flow

In order to submit a job, create and submit a job script. The submission command is "qsub".

 

Order

Description

1

Create job script

2

Submit job using qsub

3

Status check using qstat

4

Cancel job using qdel

5

Check job result

 

The qsub command confirms billing information (TSUBAME 3 points) and accepts jobs.

 

4.2.2.Creating job script

 Here is a job script format:

 

#!/bin/sh

  #$ -cwd

  #$ -l [Resource type Name] =[Number]

  #$ -l h_rt=[Maximum run time]

  #$ -p [Priority]

[Initialize module environment]                                         

[Load the relevant modules needed for the job]                                 

[Your program]

 

  In a shell script, you can set qsub options on lines that begin with #$. This is an alternative to passing them on the qsub command line. You should always specify the resource type and the maximum run time.

 

 

 

 

 

 

 

The options used by qsub are as follows.

 

Option

Description

-l [Resource type Name] =[Number]

(Required)

Specify the resource type.

-l h_rt=[Maximum run time]

(Required)

Specify the maximum run time (hours, minutes, and seconds). You can specify it as HH:MM:SS, MM:SS, or SS.

-N

name of the job (Script file name if not specified)

-o

name of the standard output file

-e

name of the standard error output file

-m

Will send email when job ends or aborts. The conditions for the -m argument include:

  a: mail is sent when the job is aborted.

  b: mail is sent when the job begins.

  e: mail is sent when the job ends.

It is also possible to combine like abe.

-M

 Email address to send email to

-p

(Premium Options)

Specify the job execution priority. If -4 or -3 is specified, a charge factor higher than -5 is applied.

-5: Standard execution priority. (Default)

-4: The execution priority is higher than -5 and lower than -3.

-3: Highest execution priority.
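
As an illustration, the header below combines several of these options; the resource type, job name, output file names, program, and e-mail address are placeholders.

#!/bin/sh
#$ -cwd
#$ -l q_node=1
#$ -l h_rt=0:30:00
#$ -N myjob
#$ -o myjob.out
#$ -e myjob.err
#$ -m be
#$ -M user@example.com
./a.out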

 


 

4.2.3.Job script - serial job/GPU job

The following is an example of a job script for executing a single (non-parallel) job or a GPU job. For GPU jobs, please load the necessary modules such as the CUDA environment.

 


#!/bin/sh

## Run in current working directory

#$ -cwd

## Resource type F: qty 1

#$ -l f_node=1

## maximum run time

#$ -l h_rt=1:00:00

#$ -N serial

## Initialize module command

. /etc/profile.d/modules.sh

# Load CUDA environment

module load cuda

## Load Intel compiler environment

module load intel

./a.out

 

4.2.4.Job script - SMP job

An example of a job script for executing an SMP parallel job is shown below. Hyper-threading is enabled on the compute nodes. Please explicitly specify the number of threads to use.

 


#!/bin/sh

#$ -cwd

## Resource type F: qty 1

#$ -l f_node=1

#$ -l h_rt=1:00:00

#$ -N openmp

. /etc/profile.d/modules.sh

module load cuda

module load intel

## 28 threads per node

export OMP_NUM_THREADS=28

./a.out

 

4.2.5.Job script - MPI job

An example of a job script for executing an MPI parallel job is shown below. For MPI jobs, please load the MPI environment corresponding to the MPI library you use, as shown below.

 

Intel MPI


#!/bin/sh

#$ -cwd

## Resource type F: qty 4

#$ -l f_node=4

#$ -l h_rt=1:00:00

#$ -N flatmpi

. /etc/profile.d/modules.sh

module load cuda

module load intel

## Load Intel MPI environment

module load intel-mpi

## 8 process per node, all MPI process is 32

mpiexec.hydra -ppn 8 -n 32 ./a.out

 

 

OpenMPI


#!/bin/sh

#$ -cwd

## Resource type F: qty 4

#$ -l f_node=4

#$ -l h_rt=1:00:00

#$ -N flatmpi

. /etc/profile.d/modules.sh

module load cuda

module load intel

## Load Open MPI environment

module load openmpi

## 8 process per node, all MPI process is 32

mpirun -npernode 8 -n 32 ./a.out


 

SGI MPT


#!/bin/sh

#$ -cwd

## Resource type F: qty 4

#$ -l f_node=4

#$ -l h_rt=1:00:00

#$ -N flatmpi

. /etc/profile.d/modules.sh

module load cuda

module load intel

## Load SGI MPT environment

module load mpt

## 8 process per node, all MPI process is 32

mpiexec_mpt -ppn 8 -n 32 ./a.out

 

- The list of nodes assigned to the submitted job is stored in the file indicated by $PE_HOSTFILE.

 

$ echo $PE_HOSTFILE

/var/spool/uge/r6i0n4/active_jobs/4564.1/pe_hostfile

$ cat /var/spool/uge/r6i0n4/active_jobs/4564.1/pe_hostfile

r6i0n4 28 all.q@r6i0n4 <NULL>

r6i3n5 28 all.q@r6i3n5 <NULL>

 


 

4.2.6.Job script - Hybrid parallel

An example of a job script for executing a process/thread parallel (hybrid) job is shown below. For MPI jobs, please load the MPI environment corresponding to the MPI library you use, as shown below.

Intel MPI


#!/bin/sh

#$ -cwd

## Resource type F: qty 4

#$ -l f_node=4

#$ -l h_rt=1:00:00

#$ -N hybrid

. /etc/profile.d/modules.sh

module load cuda

module load intel

module load intel-mpi

## 28 threads per node

export OMP_NUM_THREADS=28

## 1 MPI process per node, all MPI process is 4

mpiexec.hydra -ppn 1 -n 4 ./a.out

 

OpenMPI


#!/bin/sh

#$ -cwd

## Resource type F: qty 4

#$ -l f_node=4

#$ -l h_rt=1:00:00

#$ -N hybrid

. /etc/profile.d/modules.sh

module load cuda

module load intel

module load openmpi

## 28 threads per node

export OMP_NUM_THREADS=28

## 1 MPI process per node, all MPI process is 4

mpirun -npernode 1 -n 4 ./a.out


 

4.2.7.Job submission

A job is queued and executed by specifying the job script with the qsub command. You can submit a job using qsub as follows.

 

$ qsub -g [TSUBAME3 group] SCRIPTFILE

 

Option

Description

-g

Specify the TSUBAME3 group name.

Please add it as a qsub command-line option, not in the script. If no group is specified, the job is run as a "Trial run". A trial run is limited to 2 resource units, a maximum run time of 10 minutes, and priority -5.

 

4.2.8.Job status

The qstat command displays the status of jobs.

 

$qstat [option]

 

The options used by qstat are as follows.

 

Option

Description

-r

Displays job resource information.

-j [job-ID]

Display additional information about the job.

 

Here is the result of qstat command.

 

$ qstat

job-ID  prior    name       user      state  submit/start at      queue                  jclass  slots  ja-task-ID

----------------------------------------------------------------------------------

307     0.55500  sample.sh  testuser  r      02/12/2015 17:48:10  all.q@r8i6n1A.default          32

 

 

 

Item

Description

Job-ID

Job-ID number

Prior

Priority of job

Name

Name of the job

User

ID of user who submitted job

State

'state' of the job

  r  running

 qw  waiting in the queue

  h  on hold

  d  deleting

  t  a transition like during job-start

  s suspended

  S  suspended by the queue

  T has reached the limit of the tail

  E  error

submit/start at

Submit or start time and date of the job

Queue

Queue name

Jclass

job class name

Slot

The number of slots the job is using.

 

4.2.9.Job delete

 To delete your job, use the qdel command.

 

$ qdel [job-ID]

 

Here is the result of qdel command.

 

$ qstat

job-ID  prior    name       user      state  submit/start at      queue                  jclass  slots  ja-task-ID

----------------------------------------------------------------------------------

307     0.55500  sample.sh  testuser  r      02/12/2015 17:48:10  all.q@r8i6n1A.default          32

 

$ qdel 307

testuser has registered the job 307 for deletion

 

$ qstat

job-ID  prior    name       user      state  submit/start at      queue                  jclass  slots  ja-task-ID

----------------------------------------------------------------------------------

 

 

4.2.10.Job results

The standard output is stored in the file "SCRIPTFILE.o[job-ID]" in the job execution directory. The standard error output is "SCRIPTFILE.e[job-ID]".
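
For example, if a script named sample.sh was submitted and received job ID 307 (hypothetical values), the output files in the job execution directory would be:

sample.sh.o307    (standard output)
sample.sh.e307    (standard error output)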

 

4.3.         Interactive job

To execute an interactive job, use the qrsh command and specify the resource type and running time. After job submission with qrsh, when the job is dispatched, the command prompt will be returned. To exit the interactive job, type exit at the prompt. The usage of interactive job is shown below.

 

$ qrsh -g [TSUBAME3 group] -l [resource type name]=[numbers] -l h_rt=[max running time]

 

If no group is specified, the job is run as a "Trial run". A trial run is limited to 2 resource units, a maximum run time of 10 minutes, and priority -5.

 

The following is an example that requests one unit of resource type F with a maximum run time of 10 minutes.

 

$  qrsh -g tgz-test00-group -l f_node=1 -l h_rt=0:10:00

Directory: /home/4/t3-test00

Mon Jul 24 02:51:18 JST 2017

 

 

4.3.1.x-forwarding

X forwarding is available only with f_node.

1) Submit an interactive job with the qrsh command.

2) Connect to the allocated node with the ssh command from another terminal.

 

$  qrsh -g [TSUBAME3 group] -l [resource type name]=[numbers] -l h_rt=[max running time]

Thu Sep 21 08:17:19 JST 2017

r0i0n0:~>

 

# other terminal

Last login: Thu Sep 21 08:16:49 2017 from XXX.XXX.XXX.XXX

login0:~> ssh -YC r0i0n0

r0i0n0:~> module load <modulefile>

r0i0n0:~> <x-application>

 

 

4.4.         Storage system

On this system, in addition to the home directory, you can also use the Lustre file system as the high-speed storage area, the node-local SSD as a local scratch area, and BeeGFS On Demand, which builds a shared scratch area from the SSDs of the nodes allocated to each job.

 

4.4.1.Home Directory

Up to 25 GB of home directory space is available per user.

 

4.4.2.High-speed storage area

The high-speed storage area consists of the Lustre file system and can be used by purchasing it as a group disk. Please refer to the "TSUBAME portal User's Guide" for how to purchase a group disk.

 

4.4.3.Local scratch area

Each node has an SSD that is available to your job as local scratch space via $TMPDIR.
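
A minimal sketch of using the local scratch area in a job script is shown below (input.dat, output.dat, and a.out are placeholders); data is copied to the node-local SSD, the program runs there, and the results are copied back.

#!/bin/sh
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=1:00:00
## remember the submission directory
WORKDIR=$PWD
## stage input data to the node-local SSD
cp $WORKDIR/input.dat $TMPDIR/
cd $TMPDIR
## run the program on the local scratch area
$WORKDIR/a.out input.dat > output.dat
## copy the results back
cp output.dat $WORKDIR/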

 

4.4.4.Shared scratch area

Only for batch jobs using resource type f_node, you can use BeeGFS On Demand (BeeOND), which builds the SSDs of the allocated compute nodes into an on-demand shared file system.

To enable BeeOND, specify f_node in the job script and additionally specify "#$ -v USE_BEEOND=1". It can then be accessed at /beeond on the compute nodes. Here is a sample job script.

 

#!/bin/sh

#$ -cwd

#$ -l f_node=4

#$ -l h_rt=1:00:00

#$ -N flatmpi

#$ -v USE_BEEOND=1

. /etc/profile.d/modules.sh

module load cuda

module load intel

module load intel-mpi

mpiexec.hydra -ppn 8 -n 32 ./a.out
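
As a usage sketch, data can be staged into the shared /beeond area before the MPI run and the results copied back afterwards (input and output are placeholder directory names); for example, the body of the script above could be extended as follows.

cp -r $HOME/input /beeond/
mpiexec.hydra -ppn 8 -n 32 ./a.out /beeond/input
cp -r /beeond/output $HOME/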

 

4.5.         SSH login

With resource type f_node, you can log in directly via SSH to the compute nodes allocated to your job. You can check the allocated nodes with the following command.

 

t3-test00@login0:~> qstat -j 1463

==============================================================

job_number:                 1463

jclass:                     NONE

exec_file:                  job_scripts/1463

submission_time:            07/29/2017 14:15:26.580

owner:                      t3-test00

uid:                        1804

group:                      tsubame-users0

gid:                        1800

supplementary group:        tsubame-users0, t3-test-group00

sge_o_home:                 /home/4/t3-test00

sge_o_log_name:             t3-test00

sge_o_path:            /apps/t3/sles12sp2/uge/latest/bin/lx-amd64:/apps/t3/sles12sp2/uge/latest/bin/lx-amd64:/home/4/t3-test00/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games

sge_o_shell:                /bin/bash

sge_o_workdir:              /home/4/t3-test00/koshino

sge_o_host:                 login0

account:                    2 0 0 0 0 0 600 0 0 1804 1800

cwd:                        /home/4/t3-test00

hard resource_list:         h_rt=600,f_node=1,gpu=4

mail_list:                  t3-test00@login0

notify:                     FALSE

job_name:                   flatmpi

priority:                   0

jobshare:                   0

env_list:                   RGST_PARAM_01=0,RGST_PARAM_02=1804,RGST_PARAM_03=1800,RGST_PARAM_04=2,RGST_PARAM_05=0,RGST_PARAM_06=0,RGST_PARAM_07=0,RGST_PARAM_08=0,RGST_PARAM_09=0,RGST_PARAM_10=600,RGST_PARAM_11=0

script_file:                flatmpi.sh

parallel environment:  mpi_f_node range: 56

department:                 defaultdepartment

binding:                    NONE

mbind:                      NONE

submit_cmd:                 qsub flatmpi.sh

start_time            1:    07/29/2017 14:15:26.684

job_state             1:    r

exec_host_list        1:    r8i6n3:28, r8i6n4:28    <--  Available nodes: r8i6n3, r8i6n4

granted_req.          1:    f_node=1, gpu=4

usage                 1:    wallclock=00:00:00, cpu=00:00:00, mem=0.00000 GBs, io=0.00000 GB, iow=0.000 s, ioops=0, vmem=N/A, maxvmem=N/A

binding               1:    r8i6n3=0,0:0,1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0,10:0,11:0,12:0,13:1,0:1,1:1,2:1,3:1,4:1,5:1,6:1,7:1,8:1,9:1,10:1,11:1,12:1,13, r8i6n4=0,0:0,1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0,10:0,11:0,12:0,13:1,0:1,1:1,2:1,3:1,4:1,5:1,6:1,7:1,8:1,9:1,10:1,11:1,12:1,13

resource map          1:    f_node=r8i6n3=(0), f_node=r8i6n4=(0), gpu=r8i6n3=(0 1 2 3), gpu=r8i6n4=(0 1 2 3)

scheduling info:            (Collecting of scheduler job information is turned off)
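
In this example, you can then log in directly from the login node to one of the allocated nodes, for example:

$ ssh r8i6n3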

 

5. ISV application

Under the license agreements, the users who may use each ISV application are limited. Users who do not belong to Tokyo Institute of Technology (i.e., who do not hold a student or staff ID) can use only the following applications:

 

1.       Gaussian/Gauss View

2.       AMBER

3.       Intel Compiler

4.       PGI Compiler

5.       Allinea Forge

 

The ISV application list is shown below.

Software name            Description
ANSYS                    Finite element software
Fluent                   Finite volume software
ABAQUS                   Finite element software
ABAQUS CAE               Finite element software
Marc & Mentat / Dytran   Finite element software
Nastran                  Finite element software
Patran                   Finite element software pre/post tool
Gaussian                 Computational chemistry software
GaussView                Computational chemistry software pre/post tool
AMBER                    Computational chemistry software
Materials Studio         Computational chemistry software
Discovery Studio         Computational chemistry software
Mathematica              Mathematical symbolic computation program
Maple                    Mathematical symbolic computation program
AVS/Express              Visualization software
AVS/Express PCE          Visualization software
LS-DYNA                  Finite element software
LS-PrePost               Finite element software pre/post tool
COMSOL                   Finite element software
Schrodinger              Computational chemistry software
MATLAB                   Mathematical software
Allinea Forge            Debugger
Intel Compiler           Compiler
PGI Compiler             Compiler


 

5.1.         ANSYS

You can run it interactively as in the following example.

$ module load ansys/R18.1

$ ansys181

 

$ cd <work directory>

$ ansys181 -i <input file>

 

Type exit to exit.

 

You can submit a batch job as in the following example.

$ cd <work directory>

#### in case, sample.sh

$ qsub sample.sh

 

The following is a sample job script: MPI parallel

#!/bin/bash

#$ -cwd

#$ -e uge.err

#$ -o uge.out

#$ -V

#$ -l h_node=1

#$ -l h_rt=0:30:0

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load ansys/R18.1 intel-mpi/17.3.196

 

# ANSYS settings.

INPUT=vm1.dat

ANSYS_CMD=ansys181

NCPUS=4

 

export base_dir=/home/4/t3-test00/isv/ansys

 

${ANSYS_CMD} -b \

-dis \

-mpi intelmpi \

-np ${NCPUS} \

-usessh \

-i ${INPUT} > ${INPUT}.`date '+%Y%m%d%H%M%S'`log 2>&1

 

When you execute the following command, license Usage Status is displayed.

$ lmutil lmstat -S ansyslmd -c 27001@lice0:27001@remote:27001@t3ldap1

 


 

5.2.         Fluent

It can be started up as follows.

 

[GUI]

$ module load ansys/R18.1

$ fluent

 

 

 

 [CLI]

$ module load ansys/R18.1

$ fluent -g

 

Type exit to exit.

 

You can run it interactively as in the following example.

$ cd <work directory>

$fluent 3d -g -i fluentbench.jou

 

You can submit a batch job as in the following example.

$ cd <work directory>

## in case, sample.sh

$ qsub sample.sh

 


 

The following is a sample job script: MPI parallel (f_node)

#!/bin/bash

#$ -cwd

#$ -e uge.err

#$ -o uge.out

#$ -V

#$ -l f_node=4

#$ -l h_rt=0:30:0

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load ansys/R18.1 intel-mpi/17.3.196

 

export base_dir=/home/4/t3-test00/sts/fluent

 

## FLUENT settings.

export exe=fluent

export LM_LICENSE_FILE=27001@lice0:27001@remote:27001@t3ldap1

export ANSYSLI_SERVERS=2325@lice0:2325@remote:2325@t3ldap1

 

DATE="`date '+%Y%m%d%H%M%S'`"

export INPUT=fluentbench.jou

 

cd ${base_dir}

 

$exe 3d -mpi=intel -cnf=${PE_HOSTFILE} -g -i ${INPUT} > ${INPUT}.${DATE}.log 2>&1

 


 

The following is a sample job script: MPI parallel (h_node)

#!/bin/bash

#$ -cwd

#$ -e uge.err

#$ -o uge.out

#$ -V

#$ -l h_node=1

#$ -l h_rt=0:30:0

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load ansys/R18.1 intel-mpi/17.3.196

 

export base_dir=/home/4/t3-test00/sts/fluent

 

## FLUENT settings.

export exe=fluent

export LM_LICENSE_FILE=27001@lice0:27001@remote:27001@t3ldap1

export ANSYSLI_SERVERS=2325@lice0:2325@remote:2325@t3ldap1

 

DATE="`date '+%Y%m%d%H%M%S'`"

export INPUT=fluentbench.jou

 

cd ${base_dir}

 

$exe 3d -ncheck -mpi=intel -cnf=${PE_HOSTFILE} -g -i ${INPUT} > ${INPUT}.${DATE}.log 2>&1

Since resource types other than f_node cannot be allocated across nodes, set "#$ -l {resource type}=1" (for example, "#$ -l h_node=1" for h_node) and include the "-ncheck" option in the command.

 

When you execute the following command, license Usage Status is displayed.

$ lmutil lmstat -S ansyslmd -c 27001@lice0:27001@remote:27001@t3ldap1

 

 


 

5.3.         ABAQUS

You can run it interactively as in the following example.

$ module load abaqus/2017

$ abaqus job=<input file> <option>

 

You can submit a batch job as in the following example.

$ cd <work directory>

#### in case, sample.sh

$ qsub sample.sh

 

The following is a sample job script: MPI parallel

#!/bin/bash

#$ -cwd

#$ -V

#$ -l h_node=1

#$ -l h_rt=0:30:0

#$ -N ABAQUS-test

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load abaqus/2017 intel/17.0.4.196

 

export base_dir=/home/4/t3-test00/isv/abaqus/abaqus

 

## ABAQUS settings.

INPUT=s2a

ABAQUS_VER=2017

ABAQUS_CMD=abq${ABAQUS_VER}

SCRATCH=${base_dir}/scratch

NCPUS=2

 

cd ${base_dir}

 

/usr/bin/time ${ABAQUS_CMD} interactive \

job=${INPUT} \

cpus=${NCPUS} \

scratch=${SCRATCH} \

mp_mode=mpi > ${INPUT}.`date '+%Y%m%d%H%M%S'`log 2>&1

 

 

 


 

5.4.         ABAQUS CAE

It can be started up as follows.

 

$ module load abaqus/2017

$ abaqus cae

 

 

Click  File> Exit  on the menu bar to exit.

 


 

5.5.         Marc & Mentat / Dytran

5.5.1.Marc & Mentat / Dytran

For an overview of each product, please refer to the website of MSC Software Corporation.

Marc: http://www.mscsoftware.com/ja/product/marc

Dytran: http://www.mscsoftware.com/ja/product/dytran

 

5.5.2.Marc & Mentat / Dytran Documentations

Please refer to the following.

Marc & Mentat Docs mscsoftware.com

Dytran Docs mscsoftware.com

 

5.5.3.Marc

You can run it interactively as in the following example.

 

$ cd <work directory>

$ module load intel intel-mpi cuda marc_mentat/2017

#### in case, sample file (e2x1.dat)

$ cp /apps/t3/sles12sp2/isv/msc/marc/marc2017/demo/e2x1.dat ./

$ marc -jid e2x1

 

5.5.4.Mentat

It can be started up as follows.

                                                                                                                                                                                              

$ cd <work directory>

$ module load intel intel-mpi cuda marc_mentat/2017

$ mentat

Click  File> Exit  on the menu bar to exit.

 

When you execute the following command, license Usage Status is displayed.

$ lmutil lmstat -S MSC -c 27004@lice0:27004@remote:27004@t3ldap1

 

 

 


5.6.         Nastran

It can be started up as follows.

 

$ cd <work directory>

$ module load nastran/2017.1

## In case, sample file (um24.dat)

$ cp /apps/t3/sles12sp2/isv/msc/MSC_Nastran/20171/msc20171/nast/demo/um24.dat ./

$ nast20171 um24

 

You can submit a batch job as in the following example.

$ cd <work directory>

## In case, sample (parallel.sh)

$ qsub parallel.sh

 

The following is a sample job script:

#!/bin/bash

#$ -cwd

#$ -N nastran_parallel_test_job

#$ -e uge.err

#$ -o uge.out

#$ -l h_node=1

#$ -l h_rt=0:10:00

#$ -V

 

export NSLOTS=4

echo Running on host `hostname`

echo "UGE job id: $JOB_ID"

echo Time is `date`

echo Directory is `pwd`

echo This job runs on the following processors:

echo This job has allocated $NSLOTS processors

 

. /etc/profile.d/modules.sh

module load cuda openmpi nastran/2017.1

 

mpirun -np $NSLOTS \

nast20171 parallel=$NSLOTS um24

 

/bin/rm -f $in restrt

 

When you execute the following command, license Usage Status is displayed.

$ lmutil lmstat -S MSC -c 27004@lice0:27004@remote:27004@t3ldap1

 

 


 

5.7.         Patran

It can be started up as follows.

                                                                                                                                                                                              

$ cd <work directory>

$ module load patran/2017.0.2

$ pat2017

 

 

Click  File> Exit  on the menu bar to exit.

 

When you execute the following command, license Usage Status is displayed.

$ lmutil lmstat -S MSC -c 27004@lice0:27004@remote:27004@t3ldap1

 


 

5.8.         Gaussian

It can be started up as follows.

 

You can run it interactively as in the following example.

$ module load gaussian16/A03

$ g16 <input file>

 

Using Linda

$ module load gaussian16_linda/A03

$ g16 <input file>

 

You can submit a batch job as in the following example.

$ qsub <input script>

 

The following is a set of sample scripts for calculating the geometry optimization and vibration analysis (IR + Raman intensity) of glycine:

 

glycine.sh

#!/bin/bash

#$ -cwd

#$ -N Gaussian_sample_job

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -V

 

## The following is not mandatory. The name specified by -N is written in ".o <JOBID>" file.

echo Running on host `hostname`

echo "UGE job id: $JOB_ID"

echo Time is `date`

echo Directory is `pwd`

echo This job runs on the following processors:

echo This job has allocated $NSLOTS processors

 

## The following are mandatory.

. /etc/profile.d/modules.sh

module load gaussian16/A03

 

g16 glycinetest.gjf

 

glycine.gjf

%rwf=glycinetest.rwf

%NoSave

%chk=glycinetest.chk

%nprocshared=28

%mem=120GB

#P opt=(calcfc,tight,rfo) freq=(raman)

 

glycine Test Job

 

0 2

 N                0   -2.15739574   -1.69517043   -0.01896033 H

 H                0   -1.15783574   -1.72483643   -0.01896033 H

 C                0   -2.84434974   -0.41935843   -0.01896033 H

 C                0   -1.83982674    0.72406557   -0.01896033 H

 H                0   -3.46918274   -0.34255543   -0.90878333 H

 H                0   -3.46918274   -0.34255543    0.87086267 H

 O                0   -0.63259574    0.49377357   -0.01896033 H

 O                0   -2.22368674    1.89158057   -0.01896033 H

 H                0   -2.68286796   -2.54598119   -0.01896033 H

 

 1 2 1.0 3 1.0 9 1.0

 2

 3 4 1.0 5 1.0 6 1.0

 4 7 1.5 8 1.5

 5

 6

 7

 8

 9

 

You can run the calculation by placing the above glycine.sh and glycine.gjf in the same directory and submitting glycine.sh with the qsub command. After the calculation, glycinetest.log and glycinetest.chk will be generated.

See “5.9 GaussView” for verifying the analysis result.

 


 

5.9.         GaussView

It can be started up as follows.

 

$ module load gaussian16/A03 gaussview/6

$ gview.exe

 

 

Click  File> Exit  on the menu bar to exit.

 

Example: glycine.log

$ module load gaussian16/A03 gaussview/6

$ gview.exe glycine.log

 

The result of analysis can be confirmed from [Result]. You can check calculation overview, charge information and vibration analysis from [Summary], [Charge Distribution] and [Vibration], respectively. Since vibration analysis was performed in this example, the state of vibration can be confirmed from the [Start Animation] in the Vibration dialog.


 

5.10.    AMBER

(1) You can run it interactively as follows: CPU serial

$ cd <work directory>

$ module load amber/16

$ sander [-O|A] -i mdin -o mdout -p prmtop -c inpcrd -r restrt

 

(2) You can run it interactively as follows: CPU parallel (sander.MPI)

$ cd <work directory>

$ module load amber/16

$ mpirun -np [Number of processes] sander.MPI [-O|A] -i mdin -o mdout -p prmtop -c inpcrd -r restrt

 

(3) You can run it interactively as follows: GPU serial (pmemd.cuda)

$ cd <work directory>

$ module load amber/16_cuda

$ pmemd.cuda [-O] -i mdin -o mdout -p prmtop -c inpcrd -r restrt

 

(4) You can run it interactively as follows: GPU parallel (pmemd.cuda.MPI)

$ cd <work directory>

$ module load amber/16_cuda

$ mpirun -np [Number of processes] pmemd.cuda.MPI [-O] -i mdin -o mdout -p prmtop -c inpcrd -r restrt

 

(5) You can submit a batch job as in the following example.

$ cd <work directory>

## in case, parallel.sh

$ qsub parallel.sh

 

The following is a sample job script: CPU parallel

#!/bin/bash

#$ -cwd

#$ -N amber_parallel_test_job

#$ -e uge.err

#$ -o uge.out

#$ -l h_node=2

#$ -l h_rt=6:00:00

#$ -V

export NSLOTS=8

echo Running on host `hostname`

echo "UGE job id: $JOB_ID"

echo Time is `date`

echo Directory is `pwd`

echo This job runs on the following processors:

echo This job has allocated $NSLOTS processors

 

in=./mdin

out=./mdout_para

inpcrd=./inpcrd

top=./top

 

cat <<eof > $in

 Relaxtion of trip cage using

&cntrl                                                                        

  imin=1,maxcyc=5000,irest=0, ntx=1,

  nstlim=10, dt=0.001,

  ntc=1, ntf=1, ioutfm=1

  ntt=9, tautp=0.5,

  tempi=298.0, temp0=298.0,

  ntpr=1, ntwx=20,

  ntb=0, igb=8,

  nkija=3, gamma_ln=0.01,

  cut=999.0,rgbmax=999.0,

  idistr=0

 /

eof

 

. /etc/profile.d/modules.sh

module load amber/16

 

mpirun -np $NSLOTS \

sander.MPI -O -i $in -c $inpcrd -p $top -o $out < /dev/null

 

/bin/rm -f $in restrt

 

The following is a sample job script: GPU parallel

#!/bin/bash

#$ -cwd

#$ -N amber_cuda_parallel_test_job

#$ -e uge.err

#$ -o uge.out

#$ -l h_node=2

#$ -l h_rt=0:30:0

#$ -V

 

export NSLOTS=8

echo Running on host `hostname`

echo "UGE job id: $JOB_ID"

echo Time is `date`

echo Directory is `pwd`

echo This job runs on the following GPUs:

echo This job has allocated $NSLOTS GPUs

 

in=./mdin

out=./mdout

inpcrd=./inpcrd

top=./top

 

cat <<eof > $in

FIX (active) full dynamics ( constraint dynamics: constant volume)

&cntrl

   ntx = 7,       irest = 1,

   ntpr = 100,     ntwx = 0,     ntwr = 0,

   ntf = 2,       ntc = 2,       tol = 0.000001,

   cut = 8.0,

   nstlim = 500,  dt = 0.00150,

   nscm = 250,

   ntt = 0,

   lastist = 4000000,

   lastrst = 6000000,

 /

eof

 

. /etc/profile.d/modules.sh

module load amber/16_cuda

 

mpirun -np $NSLOTS \

pmemd.cuda.MPI -O -i $in -c $inpcrd -p $top -o $out < /dev/null

 

/bin/rm -f $in restrt

 


 

5.11.    Materials Studio

5.11.1.License connection setting

Execute  All Programs > BIOVIA > Licensing > License Administrator 7.6.14  from the Windows [Start menu] with system administrator privileges.

 

[Screenshot: BIOVIA License Administrator window (Configuration Summary)]

 

Click [Connections] -[Set] , and open "Set License Server" dialog.

[Screenshot: BIOVIA License Administrator - License Server Connections]

 

Select Redundant Server and type each host name and a port number.

[Screenshot: Set License Servers dialog (redundant servers lice0 / remote / t3ldap1, port 27005)]

 

If server status is displayed as "Connected", setting is completed.

(note) You need to establish a connection with two or more license servers.

 

5.11.2.License Usage Status

(1) On Windows

Execute  All Programs > BIOVIA > Licensing > License Administrator 7.6.14 > Utilities (FLEXlm LMTOOLs)  from the Windows [Start menu] .

 

Open the [Service/License File] tab and select [Configuration using License File].

Make sure that MSI_LICENSE_FILE is displayed.

 

[Screenshot: LMTOOLS - Service/License File tab]

 

 

Open the [Server Status] tab and click [Perform Status Enquiry] to see the license usage status.

If you want to display only specific licenses, enter the license name in [Individual Feature] and execute [Perform Status Enquiry].

 

(2) On login node

When you execute the following command, license Usage Status is displayed.

$ lmutil lmstat -S msi -c 27005@lice0,27005@remote,27005@t3ldap1

 

 

5.11.3.Start up Materials Studio

Click  BIOVIA > Materials Studio 2017 R2  from the Windows [Start menu] .


 

5.12.    Discovery Studio

5.12.1.License connection setting

Execute  All Programs > BIOVIA > Licensing > License Administrator 7.6.14  from the Windows [Start menu] with system administrator privileges.

 

[Screenshot: BIOVIA License Administrator window (Configuration Summary)]

 

Click [Connections] -[Set] , and open "Set License Server" dialog.

[Screenshot: BIOVIA License Administrator - License Server Connections]

 

Select Redundant Server and type each host name and a port number.

[Screenshot: Set License Servers dialog (redundant servers lice0 / remote / t3ldap1, port 27005)]

 

If server status is displayed as "Connected", setting is completed.

(note) You need to establish a connection with two or more license servers.

 

5.12.2.License Usage Status

(1) On Windows

Execute  All Programs > BIOVIA > Licensing > License Administrator 7.6.14 > Utilities (FLEXlm LMTOOLs)  from the Windows [Start menu] .

 

Open the [Service/License File] tab and select [Configuration using License File].

Make sure that MSI_LICENSE_FILE is displayed.

 

[Screenshot: LMTOOLS - Service/License File tab]

 

 

Open the [Server Status] tab and click [Perform Status Enquiry] to see the license usage status.

If you want to display only specific licenses, enter the license name in [Individual Feature] and execute [Perform Status Enquiry].

 

 

(2) On login node

When you execute the following command, usage status is displayed.

$ lmutil lmstat -S msi -c 27005@lice0,27005@remote,27005@t3ldap1

 

 

5.12.3.Start up Discovery Studio

Click  BIOVIA > Discovery Studio 2017 R2 64-bit Client  from the Windows [Start menu] .

 

5.12.4.User authentication

A user authentication dialog is displayed at startup. Please log in with your TSUBAME3 login name and password. If you have not set a password, please set it from the TSUBAME portal.

 


 

5.13.    Mathematica

It can be started up as follows.

[CLI]

$ cd <work directory>

$ module load mathematica/11.1.1

$ math

Mathematica 11.1.1 Kernel for Linux x86 (64-bit)

Copyright 1988-2017 Wolfram Research, Inc.

 

In[1]:=

 

Type Quit to exit.

 

[GUI]

$ module load mathematica/11.1.1

$ Mathematica

 

 

To exit the Wolfram System, you typically choose the "Exit" menu item in the notebook interface.


 

5.14.    Maple

It can be started up as follows.

[CLI]

$ module load maple/2016.2

$ maple

    |\^/|     Maple 2016 (X86 64 LINUX)

._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2016

 \  MAPLE  /  All rights reserved. Maple is a trademark of

<____ ____>  Waterloo Maple Inc.

      |       Type ? for help.

 

Type Quit to exit.

 

 [GUI]

$ module load maple/2016.2

$ xmaple

 

 

Click  File> Exit  on the menu bar to exit.

 

When you execute the following command, license Usage Status is displayed.

$ lmutil lmstat -S maplelmg -c 27007@lice0:27007@remote:27007@t3ldap1

 

 


 

5.15.    AVS/Express

It can be started up as follows.

 

$ module load avs/8.4

$ xp

 

 

Click  File> Exit  on the menu bar to exit.


5.16.    AVS/Express PCE

It can be started up as follows.

 

$ module load avs/8.4

$ para_start

 

 

Click  File> Exit  on the menu bar to exit.

 

By accessing the following URL in the web browser, you can check the license Usage Status.

http://lice0:33333/STATUS

 

 


 

5.17.    LS-DYNA

5.17.1.Overview LS-DYNA

LS-DYNA is a general-purpose finite element program capable of simulating complex real world problems. It is used by the automobile, aerospace, construction, military, manufacturing, and bioengineering industries.

 

5.17.2.LS-DYNA

You can submit a batch job as in the following examples:

 

[SMP in single precision]

$ cd <work directory>

## In case, run_smps_r9.1.0.sh

$ qsub run_smps_r9.1.0.sh

 

 [SMP in double precision]

$ cd <work directory>

## In case, run_smpd_r9.1.0.sh

$ qsub run_smpd_r9.1.0.sh

 

[MPP in single precision]

$ cd <work directory>

## In case, run_mpps_r9.1.0-1node-avx2.sh

$ qsub run_mpps_r9.1.0-1node-avx2.sh

 

 [MPP in double precision]

$ cd <work directory>

## In case, run_mppd_r9.1.0-1node-avx2.sh

$ qsub run_mppd_r9.1.0-1node-avx2.sh

 

The following is a sample job script:

 

[SMP in single precision]

#!/bin/bash

#$ -cwd

#$ -V

#$ -l h_node=1

#$ -l h_rt=0:30:0

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load lsdyna/R9.1.0

 

export base_dir=/home/4/t3-test00/isv/lsdyna

cd $base_dir/smp_s

 

export exe=smpdynas

 

#export LSTC_LICENSE=network

#export LSTC_MEMORY=auto

 

export NCPUS=4

export OMP_NUM_THREADS=${NCPUS}

export INPUT=$base_dir/sample/airbag_deploy.k

 

${exe} i=${INPUT} ncpus=${NCPUS}

 

 [SMP in double precision]

#!/bin/bash

#$ -cwd

#$ -V

#$ -l h_node=1

#$ -l h_rt=0:30:0

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load lsdyna/R9.1.0

 

export base_dir=/home/4/t3-test00/isv/lsdyna

cd $base_dir/smp_d

 

export exe=smpdynad

 

#export LSTC_LICENSE=network

#export LSTC_MEMORY=auto

 

export NCPUS=4

export OMP_NUM_THREADS=${NCPUS}

export INPUT=$base_dir/sample/airbag_deploy.k

 

${exe} i=${INPUT} ncpus=${NCPUS}

 

[MPP in single precision]

#!/bin/bash

#$ -cwd

#$ -V

#$ -l h_node=1

#$ -l h_rt=0:30:0

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load lsdyna/R9.1.0 mpt/2.16

 

export base_dir=/home/4/t3-test00/isv/lsdyna

cd $base_dir/mpp_s

 

export exe=mppdynas_avx2

export dbo=l2as_avx2

 

#export LSTC_LICENSE=network

#export LSTC_MEMORY=auto

 

export NCPUS=4

export OMP_NUM_THREADS=1

export INPUT=$base_dir/sample/airbag_deploy.k

 

export MPI_BUFS_PER_PROC=512

export MPI_REMSH=ssh

 

mpiexec_mpt -v -np 4 dplace -s1 ${exe} i=${INPUT} ncpus=${NCPUS}

${dbo} binout*

 

[MPP in double precision]

 

#!/bin/bash

#$ -cwd

#$ -V

#$ -l h_node=1

#$ -l h_rt=0:30:0

 

. /etc/profile.d/modules.sh

module load cuda/8.0.44

module load lsdyna/R9.1.0 mpt/2.16

 

export base_dir=/home/4/t3-test00/isv/lsdyna

cd $base_dir/mpp_d

 

export exe=mppdynad_avx2

export dbo=l2ad_avx2

 

#export LSTC_LICENSE=network

#export LSTC_MEMORY=auto

 

export NCPUS=4

export OMP_NUM_THREADS=1

export INPUT=$base_dir/sample/airbag_deploy.k

 

export MPI_BUFS_PER_PROC=512

export MPI_REMSH=ssh

 

mpiexec_mpt -v -np 4 dplace -s1 ${exe} i=${INPUT} ncpus=${NCPUS}

${dbo} binout*

 

Modify the script to match your environment. The input file is specified with INPUT=<input file> in the shell script.

 

Executing the following command displays the license usage status.

$ lstc_qrun

 

 


 

5.18.    LS-PrePost

5.18.1.Overview LS-PrePost

LS-PrePost is an advanced pre- and post-processor that is delivered free with LS-DYNA. The user interface is designed to be both efficient and intuitive. LS-PrePost runs on Windows, Linux, and Unix, using OpenGL graphics to achieve fast rendering and XY plotting.

 

5.18.2.LS-PrePost

It can be started up as follows.

 

$ cd <work directory>

$ module load lsprepost/4.3

$ lsprepost

 

 _____________________________________________________
|                                                     |
|      Livermore Software Technology Corporation      |
|                                                     |
|                 L S - P R E P O S T                 |
|                                                     |
|     Advanced Pre- and Post-Processor for LS-DYNA    |
|                                                     |
|          LS-PrePost(R) V4.3.11 - 04Jul2017          |
|                                                     |
|             LSTC Copyright (C) 1999-2014            |
|                  All Rights Reserved                |
|_____________________________________________________|

 

 OpenGL version 3.0 Mesa 11.2.1

 

Click  File> Exit  on the menu bar to exit.

 


 

5.19.    COMSOL

It can be started up as follows.

$ module load comsol/53

$ comsol

 

 

Click  File> Exit  on the menu bar to exit.
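
A model can also be solved without the GUI as a batch job. The following is a minimal sketch; input.mph and output.mph are hypothetical files, and the batch options are assumed from the standard COMSOL command-line interface.

#!/bin/bash
#$ -cwd
#$ -l h_node=1
#$ -l h_rt=1:0:0
. /etc/profile.d/modules.sh

module load comsol/53
# Solve a model file without the GUI (batch options assumed from the standard COMSOL CLI)
comsol batch -inputfile input.mph -outputfile output.mph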

 

Executing the following command displays the license usage status.

$ lmutil lmstat -S LMCOMSOL -c 27009@lice0:27009@remote:27009@t3ldap1

 


 

5.20.    Schrodinger

It can be started up as follows.

[CLI]

$ module load schrodinger/Feb-17

$ ligprep -ismi <input file> -omae <output file>
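
The ligprep command shown above can also be wrapped in a batch job. The following is a minimal sketch; input.smi and output.mae are hypothetical file names, and the -WAIT option (standard Schrodinger job control) is assumed to keep the command in the foreground until the calculation finishes.

#!/bin/bash
#$ -cwd
#$ -l h_node=1
#$ -l h_rt=0:30:0
. /etc/profile.d/modules.sh

module load schrodinger/Feb-17
# -WAIT keeps ligprep in the foreground so the scheduler job does not end early (assumed option)
ligprep -WAIT -ismi input.smi -omae output.mae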

 

[GUI]

$ module load schrodinger/Feb-17

$ maestro

 

 

Click  File> Exit  on the menu bar to exit.

 

 

Executing the following command displays the license usage status.

$ lmutil lmstat -S SCHROD -c 27010@lice0:27010@remote:27010@t3ldap1

 


 

5.21.    MATLAB

It can be started up as follows.

[GUI]

$ module load matlab/R2017a

$ matlab

 

 

 [CLI]

$ module load matlab/R2017a

$ matlab -nodisplay

Type exit to exit.
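
MATLAB can also be run non-interactively as a batch job; the following is a minimal sketch (myscript.m is a hypothetical script).

#!/bin/bash
#$ -cwd
#$ -l h_node=1
#$ -l h_rt=0:10:0
. /etc/profile.d/modules.sh

module load matlab/R2017a
# Run a script without the desktop and quit when it finishes (myscript.m is a hypothetical file)
matlab -nodisplay -r "run('myscript.m'); exit"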

 

Executing the following command displays the license usage status.

$ lmutil lmstat -S MLM -c 27014@lice0:27014@remote:27014@t3ldap1

 

 

5.22.    Allinea Forge

It can be started up as follows.

$ module load allinea/7.0.5

$ forge

 

 

Click  File> Exit  on the menu bar to exit.

 

6. Freeware

The list of the main freeware is shown below.

 

Software name      Description
GAMESS             Computational chemistry Software
Tinker             Computational chemistry Software
GROMACS            Computational chemistry Software
LAMMPS             Computational chemistry Software
NAMD               Computational chemistry Software
CP2K               Computational chemistry Software
OpenFOAM           CFD software
CuDNN              GPU library
NCCL               GPU library
Caffe              Deep learning framework
Chainer            Deep learning framework
TensorFlow         Deep learning framework
R                  Statistics interpreter
Apache Hadoop      Distributed data processing tool
POV-Ray            Visualization software
ParaView           Visualization software
VisIt              Visualization software
GIMP               Graphics tool
gnuplot            Graphics tool
Tgif               Graphics tool
ImageMagick        Graphics tool
TeX Live           TeX distribution
Java SDK           Development environment
PETSc              Scientific computation library
fftw               FFT library

 


 

6.1.         Computational chemistry Software

6.1.1.GAMESS

The following is a sample job script.

 

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N gamess

. /etc/profile.d/modules.sh

 

module load intel intel-mpi gamess

cat $PE_HOSTFILE | awk '{print $1}' > $TMPDIR/machines

cd $GAMESS_DIR

./rungms exam08 mpi 4 4

 

Refer to the site shown below.

http://www.msg.ameslab.gov/gamess/index.html

 

6.1.2.Tinker

The following is a sample job script.

 

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N tinker

. /etc/profile.d/modules.sh

 

module load intel tinker

cp -rp $TINKER_DIR/example $TMPDIR

cd $TMPDIR/example

dynamic waterbox.xyz -k waterbox.key 100 1 1 2 300

cp -rp $TMPDIR/example $HOME

Refer to the site shown below.

https://dasher.wustl.edu/tinker/

 

6.1.3.GROMACS

The following is a sample job script.

 

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N gromacs

. /etc/profile.d/modules.sh

 

module load cuda intel-mpi gromacs

cp -rp water_GMX50_bare.tar.gz $TMPDIR

cd $TMPDIR

tar xf water_GMX50_bare.tar.gz

cd water-cut1.0_GMX50_bare/3072

gmx_mpi grompp -f pme.mdp

OMP_NUM_THREADS=2 mpirun -np 4 gmx_mpi mdrun

cp -pr $TMPDIR/water-cut1.0_GMX50_bare $HOME

 

Refer to the site shown below.

http://www.gromacs.org/


 

6.1.4.LAMMPS

The following is a sample job script.

 

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N lammps

. /etc/profile.d/modules.sh

 

module load intel cuda openmpi lammps

cp -rp $LAMMPS_DIR/bench $TMPDIR

cd $TMPDIR/bench

mpirun -np 4 lmp_gpu -sf gpu -in in.lj -pk gpu 4

cp -rp $TMPDIR/bench $HOME

 

Refer to the site shown below.

http://lammps.sandia.gov/doc/Section_intro.html

 

6.1.5.NAMD

The following is a sample job script.

 

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N namd

. /etc/profile.d/modules.sh

 

module load cuda intel namd

cp -rp $NAMD_DIR/examples/stmv.tar.gz $TMPDIR

cd $TMPDIR

tar xf stmv.tar.gz

cd stmv

namd2 +idlepoll +p4 +devices 0,1 stmv.namd

cp -rp $TMPDIR/stmv $HOME

 

Refer to the site shown below.

http://www.ks.uiuc.edu/Research/namd/2.12/ug/

 

6.1.6.CP2K

The following is a sample job script.

 

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N cp2k

. /etc/profile.d/modules.sh

 

module load intel intel-mpi cp2k

cp -rp $CP2K_DIR/tests/QS/benchmark $TMPDIR

cd $TMPDIR/benchmark

mpirun -np 8 cp2k.popt -i H2O-32.inp -o H2O-32.out

cp -rp $TMPDIR/benchmark $HOME

 

Refer to the site shown below.

https://www.cp2k.org/

 


 

6.2.         CFD software

6.2.1.OpenFOAM

The following is a sample job script.

 

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N openfoam

. /etc/profile.d/modules.sh

 

module load cuda openmpi openfoam

mkdir -p $TMPDIR/$FOAM_RUN

cd $TMPDIR/$FOAM_RUN

cp -rp $FOAM_TUTORIALS .

cd tutorials/incompressible/icoFoam/cavity/cavity

blockMesh

icoFoam

paraFoam

 

Refer to the site shown below.

http://www.openfoam.com/documentation/


 

6.3.         Machine learning, big data analysis software

6.3.1.CuDNN

Three versions (5.1, 6.0, 7.0) are installed. It can be loaded as follows.

 

$ module load cuda cudnn
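
Once the module is loaded, a CUDA program can be compiled and linked against cuDNN, for example as follows (a sketch; sample.cu is a hypothetical source file, and the module is assumed to export the necessary include and library paths).

$ nvcc sample.cu -o sample -lcudnn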

 

6.3.2.NCCL

It can be loaded as follows.

 

$ module load cuda nccl
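
As with cuDNN, a CUDA program can then be linked against NCCL, for example (a sketch; sample.cu is a hypothetical source file, and the module is assumed to export the necessary include and library paths).

$ nvcc sample.cu -o sample -lnccl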

 

6.3.3.Caffe

It can be used interactively as in the following example.

 

$ module load python-extension

$ mkdir test

$ cp -rp $CAFFE_DIR/examples $CAFFE_DIR/data test

$ cd test

$ ./data/mnist/get_mnist.sh

$ vi examples/mnist/create_mnist.sh   <-- Change to "BUILD=$CAFFE_DIR/bin"

$ ./examples/mnist/create_mnist.sh

$ vi examples/mnist/train_lenet.sh    <-- Change "./build/tools/caffe" to "caffe"

$ ./examples/mnist/train_lenet.sh

 

Refer to the site shown below.

http://caffe.berkeleyvision.org/

 

6.3.4.Chainer

It can be used interactively as in the following example.

 

$ module load python-extension

$ cp -rp $PYTHON_EXTENSION_DIR/examples/chainer/examples .

$ cd examples/mnist

$ ./train_mnist_model_parallel.py
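
The same example can also be submitted as a batch job; the following is a minimal sketch that follows the interactive steps above and the scheduler conventions used elsewhere in this chapter.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N chainer
. /etc/profile.d/modules.sh

module load python-extension
# Copy the bundled Chainer examples to the local scratch area and run the MNIST sample
cp -rp $PYTHON_EXTENSION_DIR/examples/chainer/examples $TMPDIR
cd $TMPDIR/examples/mnist
./train_mnist_model_parallel.py
cp -rp $TMPDIR/examples $HOME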

 

Refer to the site shown below.

https://docs.chainer.org/en/stable/

 

6.3.5.TensorFlow

It can be used interactively as in the following example.

#python2.7

$ module load python-extension

$ cp -rp $PYTHON_EXTENSION_DIR/examples/tensorflow/examples .

$ cd examples/tutorials/mnist

$ python mnist_deep.py

 

#python3.4

$ module load python-extension/3.4

$ cp -rp $PYTHON_EXTENSION_DIR/examples/tensorflow/examples .

$ cd examples/tutorials/mnist

$ python mnist_deep.py
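
The same example can also be submitted as a batch job; the following is a minimal sketch based on the interactive steps above (python2.7 version).

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N tensorflow
. /etc/profile.d/modules.sh

module load python-extension
# Copy the bundled TensorFlow examples to the local scratch area and run the MNIST sample
cp -rp $PYTHON_EXTENSION_DIR/examples/tensorflow/examples $TMPDIR
cd $TMPDIR/examples/tutorials/mnist
python mnist_deep.py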

 

Refer to the site shown below.

https://www.tensorflow.org/

 

6.3.6.R

Rmpi for parallel processing and rpud for GPU computing are installed.

It can be used interactively as in the following example.

 

$ module load intel cuda openmpi r

$ mpirun -stdin all -np 2 R --slave --vanilla < test.R
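
The same example can also be submitted as a batch job; the following is a minimal sketch based on the interactive command above (test.R is the user's R script).

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N r-test
. /etc/profile.d/modules.sh

module load intel cuda openmpi r
# Run the R script in parallel with Rmpi, as in the interactive example
mpirun -stdin all -np 2 R --slave --vanilla < test.R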

 

6.3.7.Apache Hadoop

It can be used interactively as in the following example.

$ module load jdk hadoop

$ mkdir input

$ cp -p $HADOOP_HOME/etc/hadoop/*.xml input

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep input output 'dfs[a-z.]+'

$  cat output/part-r-00000

1       dfsadmin

 

It can also be submitted as a batch job. The following is a sample job script.

#!/bin/bash

#$ -cwd

#$ -l f_node=1

#$ -l h_rt=0:10:0

#$ -N hadoop

. /etc/profile.d/modules.sh

 

module load jdk hadoop

cd $TMPDIR

mkdir input

cp -p $HADOOP_HOME/etc/hadoop/*.xml input

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep input output 'dfs[a-z.]+'

cp -rp output $HOME

 


 

6.4.         Visualization software

6.4.1.POV-Ray

It can be started up as follows.

 

$ module load pov-ray

$ povray -benchmark

 

Refer to the site shown below.

 

http://www.povray.org/

 

6.4.2.ParaView

It can be started up as follows.

 

$ module load cuda openmpi paraview

$ paraview

 

Refer to the site shown below.

https://www.paraview.org/

 

6.4.3.VisIt

It can be started up as follows.

 

$ module load cuda openmpi vtk visit

$ visit

 

Refer to the site shown below.

https://wci.llnl.gov/simulation/computer-codes/visit/

 


 

6.5.         Other freeware

6.5.1.gimp

It can be started up as follows.

 

$ module load gimp

$ gimp

 

6.5.2.gnuplot

In addition to the standard configure options, it is built with support for X11, LaTeX, PDFlib-lite, and Qt4.

It can be started up as follows.

 

$ module load gnuplot

$ gnuplot
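
gnuplot can also be run non-interactively, for example to render a plot to a file (a minimal sketch; plot.eps is just an example output name).

$ gnuplot -e "set terminal postscript eps; set output 'plot.eps'; plot sin(x)"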

 

6.5.3.tgif

It can be started up as follows.

 

$ module load tgif

$ tgif

 

(note) If the error "Cannot open the Default (Msg) Font '-*-courier-medium-r-normal-*-14-*-*-*-*-*-iso8859-1'" appears and tgif does not start, add the following lines to ~/.Xdefaults.

Tgif.DefFixedWidthFont:             -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*

Tgif.DefFixedWidthRulerFont:        -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*

Tgif.MenuFont:                    -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*

Tgif.BoldMsgFont:                 -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*

Tgif.MsgFont:                     -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*

 

6.5.4.ImageMagick

In addition to the standard configure options, it is built with support for X11, HDRI, libwmf, and JPEG.

It can be started up as follows.

 

$ module load imagemagick

$ convert -size 48x1024 -colorspace RGB 'gradient:#000000-#ffffff' -rotate 90 -gamma 0.5 -gamma 2.0 result.jpg

 

6.5.5.pLaTeX2e

It can be started up as follows.

 

$ module load texlive

$ platex test.tex

$ dvipdfmx test.dvi

 

(note) Please use dvipdfmx to create PDF files; dvipdf does not convert Japanese correctly.

 

6.5.6.Java SDK

It can be started up as follows.

 

$ module load jdk

$ javac Test.java

$ java Test

 

6.5.7.PETSc

Two builds are installed, one for real numbers and one for complex numbers. It can be used as follows.

 

$ module load intel intel-mpi

$ module load petsc/3.7.6/real           <-- real number

 OR

$ module load petsc/3.7.6/complex       <-- complex number

$ mpiifort test.F -lpetsc
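
The compiled program is an MPI executable, so it is typically run through mpirun inside a batch job; the following is a minimal sketch (a.out is the executable produced by the compile command above).

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
. /etc/profile.d/modules.sh

module load intel intel-mpi petsc/3.7.6/real
# Run the PETSc program with 4 MPI processes (a.out is the executable built above)
mpirun -np 4 ./a.out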

 

6.5.8.fftw

Two versions, 2.1.5 and 3.3.6, are installed. It can be used as follows.

 

$ module load intel intel-mpi fftw         <-- with Intel MPI

OR

$ module load intel cuda openmpi fftw      <-- with Open MPI

$ ifort test.f90 -lfftw3

 


 

Revision History

Revision   Date        Change
rev1       8/23/2017   First
rev2       9/6/2017    Second
rev3       9/11/2017   Add login node memory limit to "2.2 Login"
                       Add note on CIFS volume display to "2.4 Storage service (CIFS)"
                       Add usage method to "4.4.4 Shared scratch area"
rev4       9/14/2017   Add note on CIFS volume display to "2.4 Storage service (CIFS)"
rev5       9/28/2017   Add "4.3.1 x-forwarding"