VASP Installation on Linux Clusters
- Pradeep
- Nov 29, 2015
- 2 min read
Installing VASP on a massively parallel Linux machine (or cluster) is a bit tricky. If the MKL and Open MPI libraries are properly installed in a system directory such as /opt, and are addressed correctly in VASP's makefile, then the code can be compiled successfully.
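Before you begin, it may help to confirm that the libraries are actually where you expect them. The paths below are common defaults and may well differ on your system:
$ ls /opt/intel/mkl/lib
$ which mpif90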
Step - 1
To untar the two files that you received through your license agreement with the VASP team, type:
$ tar -zxvf vasp.5.lib.tar.gz
$ tar -zxvf vasp.5.3.2.tar.gz
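If the extraction succeeded, you should now see two new directories (the exact names depend on your VASP version):
$ ls
vasp.5.lib vasp.5.3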
Step - 2
$ cd vasp.5.lib/
$ cp makefile.linux_gfortran makefile (choose the makefile that matches the Fortran compiler available on your system)
$ vi makefile
Now, in insert mode, change ifc to ifort. Then press Esc and type :wq to save the makefile. Then type:
$ make
This completes the build of the library files that accompany the VASP package. (Do not worry about the warning messages.)
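If you prefer a non-interactive edit, the same substitution can be done with a sed one-liner (this assumes ifc appears in the makefile only as the compiler name):
$ sed -i 's/ifc/ifort/g' makefile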
Step - 3
Your next job is to change into the vasp.5.3 directory.
$ cd vasp.5.3
$ cp makefile.linux_gfortran makefile
$ vi makefile
(Edit this file by defining the paths to LAPACK, SCALAPACK, FFTW, FFT3D, Open MPI, etc., and save the file. If you are going to compile the parallel version, make sure to uncomment the MPI compiler line, i.e., remove the leading # so that it reads FC=mpif90.)
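For orientation, the relevant lines in the edited makefile might look roughly like the following. Every path and library location here is a placeholder based on one typical layout; substitute the actual locations on your cluster and check the variable names against your own makefile:
FC=mpif90
BLAS= -L/opt/intel/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
SCA= -L/opt/scalapack/lib -lscalapack
FFT3D= fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /opt/fftw/lib/libfftw3.a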
$ make
Step - 4
To run your first vasp job, you need the vasp executable on your PATH, which you can set in your .bashrc file, and a job submission script.
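For example, the .bashrc entry might look like this (the install path below is only a placeholder; point it at your own build directory):
export PATH=$PATH:/home/$USER/vasp.5.3
A PBS submission script similar to the following can then be used: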
#!/bin/bash
#PBS -q short
#PBS -l walltime=072:00:00,nodes=1:ppn=8
#PBS -V
#PBS -N test
#PBS -e test.pbserr
#PBS -o test.pbslog

cd $PBS_O_WORKDIR

# Create scratch directories on each calculation node
mkdir /scratch/$USER/work$PBS_JOBID
cd /scratch/$USER/work$PBS_JOBID
cp -r $PBS_O_WORKDIR/* .
for i in `cat $PBS_NODEFILE`
do
  if [ `hostname` != $i ]; then
    rcp -r /scratch/$USER/work$PBS_JOBID $i:/scratch/$USER
  fi
done

# Record node information before running the job
echo job $PBS_JOBID is submitted on node `hostname` > PBS_SUBMIT_NODE
cat $PBS_NODEFILE > PBS_NODEFILE
cp PBS_SUBMIT_NODE $PBS_O_WORKDIR
cp PBS_NODEFILE $PBS_O_WORKDIR

echo working directory is $PBS_O_WORKDIR
echo job ID is $PBS_JOBID

# Run vasp
NPROCS=`wc -l < $PBS_NODEFILE`
/home/openmpi-1.6.5/bin/mpirun -machinefile $PBS_NODEFILE -np $NPROCS vasp > vasp.log

# Copy results back and delete scratch directories on each calculation node
cp -r /scratch/$USER/work$PBS_JOBID/* $PBS_O_WORKDIR
cd ../
for i in `cat $PBS_NODEFILE`
do
  ssh $i rm -rf /scratch/$USER/work$PBS_JOBID
done
Make sure to keep all the required files in the same directory; that is, files such as INCAR, KPOINTS, POSCAR, POTCAR, and vasp-run-script.sh should all be there. You will then type:
$ qsub vasp-run-script.sh (Have fun with VASP!)
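Once the job is submitted, you can check its status with the standard PBS commands (shown here as a convenience; your cluster's documentation may list site-specific variants):
$ qstat -u $USER
$ qstat -f <job-id>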
Additional help tips and recommendations in connection with various other issues of the code can be found via the links below:
(ii) LDOS_and_PDOS
(iii) Materials Project
(iv) Helpful tutorial - 1
(vii) Helpful tutorial - 3
(viii) VASP special tutorials
(ix) LMTO-ELF