Many answers to questions can be found in the Bio-X Cluster FAQ.
qsub "script name"
qsub -I
In practice, I use "qsub -I" when I am compiling; other than that I rarely if ever type the qsub command directly, because it is contained in the scripts below.
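For example, a compile session on a compute node might look like this (the source directory and build command are placeholders for your own):
qsub -I
cd $HOME/src/myprogram
make
exit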
Here are the shell scripts that I use. I recommend that you make copies of each of these:
$HOME/util/myqsub.sh
#!/bin/bash
# wrapper script: packs the command line and submits _qsub.sh to the queue
_SUB=$HOME/util/_qsub.sh
ARGVPL=$HOME/util/argv.pl
PWD=`pwd`
# pack all arguments into one '?'-separated string so they survive qsub's -v option
ARGS=`$ARGVPL --pack $*`
qsub -o $HOME/tmp -e $HOME/tmp -v JOBDIR=$PWD,ARGS=$ARGS $_SUB
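As an illustration (the user and directory names below are made up for the example), running
$HOME/util/myqsub.sh $HOME/util/your_script.sh whatever
from /home/username/run1 packs the arguments into /home/username/util/your_script.sh?whatever and ends up submitting roughly:
qsub -o $HOME/tmp -e $HOME/tmp -v JOBDIR=/home/username/run1,ARGS=/home/username/util/your_script.sh?whatever $HOME/util/_qsub.sh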
$HOME/util/_qsub.sh
#!/bin/bash
#PBS -l ncpus=1,walltime=100:00:00
# variables passed in with the -v option of qsub:
#   JOBDIR - the directory where all the files are
#   ARGS   - the packed command line to run
ARGVPL=$HOME/util/argv.pl
ARGS=`$ARGVPL --unpack $ARGS`
if [ -z "$JOBDIR" ] ; then
echo "$PBS_JOBID: No JOBDIR"
exit 1
fi
cd $JOBDIR
$ARGS
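Continuing the illustration above (same made-up paths), on the compute node _qsub.sh unpacks ARGS and effectively runs:
cd /home/username/run1
/home/username/util/your_script.sh whatever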
$HOME/util/argv.pl
#!/usr/bin/perl -w
# pack a command line into one '?'-separated string (--pack),
# or split such a string back into space-separated words (--unpack)
unless ($ARGV[0]) {
exit;
}
$mode = shift(@ARGV);
if ($mode eq "--pack") {
print $ARGV[0];
for ($i=1; $i<=$#ARGV; $i++) {
print "?$ARGV[$i]";
}
} elsif ($mode eq "--unpack") {
@list = split(/\?/, $ARGV[0]);
foreach $i (@list) {
print "$i ";
}
} elsif ($mode eq "--print") {
foreach $i (@ARGV) {
print "$i ";
}
}
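You can check what argv.pl does by running it by hand; for example:
$HOME/util/argv.pl --pack foo bar baz        # prints: foo?bar?baz
$HOME/util/argv.pl --unpack 'foo?bar?baz'    # prints: foo bar baz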
To submit a job, you would enter a command like:
$HOME/util/myqsub.sh $HOME/util/your_script.sh whatever
Usually it is most convenient to put this command inside another script. In general, you will be running many similar jobs at once. It is common practice to put this call (using the 'system' command in a perl script, for example) into a loop in a script that calls all your jobs at once; a minimal sketch of such a loop appears after the template below.
A template for this script follows. The template assumes that you have created two files: the job script itself ($HOME/util/your_script.sh) and a tar-gzipped archive of its input data (whatever.tgz in this example) in the directory you submit from.
The command that ultimately runs inside the job is:
your_script.sh whatever
The script your_script.sh should be something like this:
#!/bin/bash
# THIS SCRIPT TAKES ONE ARGUMENT: THE NAME OF THE INPUT FILE
# IT ASSUMES THAT THE NAME OF THE .TGZ FILE TO MOVE AND UNPACK IS ${1}.tgz
#
# set variables for data directories
#
# $PWD here is the submission directory: _qsub.sh cd's to $JOBDIR (passed in by myqsub.sh) before running this script
export tempdir=$TMPDIR # temp space on local compute node
export inputdir=$PWD # location of input files
export outputdir=$PWD # location to put results into
export output=$TMPDIR/$1.outfile # file for redirected standard output
export error=$TMPDIR/$1.errfile # file for redirected standard error
# copy input datafiles to temporary space on compute node
# it is best to copy a single tar file instead of many files
# NOTE: I've used tar-zipped files throughout - you can change this to tar if you like
cp $inputdir/${1}.tgz $tempdir
# cd to your working directory
cd $tempdir
tar xzf ${1}.tgz
# put your commands to run here
# write output to temp space
# 1> $output 2> $error
$HOME/bin/delphi $1 1> $output 2> $error
#
# copy results back to fileserver
#
# remove the copied input archive so it is not packed into the results
rm -f ${1}.tgz
tar czf result.$1.tgz *
cp result.$1.tgz $outputdir/
cd $outputdir
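Here is the driver loop mentioned above: a minimal bash sketch that submits one job per input archive in the current directory (a Perl script calling 'system' in a loop works the same way; the file naming is just an example):
#!/bin/bash
# submit one cluster job for every .tgz input archive in this directory
for f in *.tgz ; do
name=${f%.tgz}
$HOME/util/myqsub.sh $HOME/util/your_script.sh $name
done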
If you are going to run opt.pl from the computer on your desk, follow these instructions:
To copy files from the cluster to your machine:
scp -P #### username@bioxcluster.stanford.edu:path/to/files new/local/path/to/files
To copy files from your machine to the cluster:
scp -P #### local/path/to/files username@bioxcluster.stanford.edu:new/path/to/file
If you are going to run opt.pl from tree1 or cmgm or some other host, follow these instructions:
--> substitute your hostname (e.g. tree1.stanford.edu) for all occurrences of 'host' below
--> remember to use the right username with the right host
To copy files from the cluster to the machine you are typing on:
scp -P #### username@bioxcluster.stanford.edu:path/to/files new/local/path/to/files
To copy files from the other host to the machine you are typing on:
scp username@host:path/to/files new/local/path/to/files
To copy files from the machine you are typing on to the other host:
scp local/path/to/files username@host:new/path/to/files
To copy files from the machine you are typing on to the cluster:
scp -P #### local/path/to/files username@bioxcluster.stanford.edu:new/path/to/file
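For example, to fetch the results archive that your_script.sh leaves in the job directory back to the machine you are working on (the job directory and archive name here are just examples):
scp -P #### username@bioxcluster.stanford.edu:run1/result.whatever.tgz .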