STAR command gets Killed
1
0
Entering edit mode
7.0 years ago

Hello, I am currently trying to map large files on a huge computing cluster and am getting an error that is not very informative. I submit a job via sbatch jobMUS1.sh --mem=1200000mb, which calls another script (alignSTARpipeMUSViral2.sh) that runs STAR for each of my fastq files. The first STAR call is cancelled with:

/naslx/projects/t1172/di36dih/Control/MUS/alignSTARpipeMUSViral2.sh: line 19: 12217 Killed /naslx/projects/t1172/di36dih/Linux_x86_64_static/STAR --readFilesIn $srr*fastq --genomeDir $genome --runThreadN 20 --quantMode GeneCounts --outReadsUnmapped Fastx --outFileNamePrefix $srr.mus.

The output shows only the line number of the STAR call in the script, the word "Killed", and the STAR call itself. Has anyone experienced this kind of error when working on a cluster? Is it an error message from STAR or from the cluster? Memory should not be the problem.

STAR Mapping • 4.3k views
ADD COMMENT
2
Entering edit mode

If the job is actually being killed, it often means the wall time required for the job exceeds your allowance on the cluster. Are you sure your jobs are running within the scope of your allowances?
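To check the wall-time side of this, a small sketch (assuming a Slurm cluster; sinfo's %P and %l fields print each partition's name and time limit) that lists the limits you can compare your request against:

```shell
#!/usr/bin/env bash
# Sketch, assuming Slurm: list each partition's wall-time limit so it
# can be compared with what the job actually requests.
if command -v sinfo >/dev/null; then
    out=$(sinfo -o "%P %l")   # partition name and its time limit
else
    out="sinfo not found; run this on a Slurm login node"
fi
printf '%s\n' "$out"
```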

ADD REPLY
0
Entering edit mode

According to the sysadmin it should be within my allowances; I am currently checking on that. I have sent him the error message too, but have had no answer yet.

ADD REPLY
1
Entering edit mode

Are you asking for 1200000mb of memory, which is 1200GB, i.e. 1.2TB? Do you really need all of that?

ADD REPLY
0
Entering edit mode

I realised I can use less RAM when I drop the genomeLoad parameter, but not less than 800GB. I am running two consecutive STAR commands against a large index of all bacterial genomes, whose SA file alone is 160GB.

ADD REPLY
0
Entering edit mode

What do you intend to do? Is STAR the right tool for the job? What about using CLARK or KRAKEN?

ADD REPLY
1
Entering edit mode

Not a STAR user, but --readFilesIn $srr*fastq does not look right. For PE data it should be --readFilesIn read1 read2. Is this a single-end dataset?
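A quick sketch (file names are hypothetical) of why the unquoted glob is risky: if more than one file matches, STAR silently receives several positional arguments and would interpret them as paired-end mates.

```shell
#!/usr/bin/env bash
# Sketch with hypothetical file names: an unquoted glob like $srr*fastq
# expands to every matching file, so STAR could receive two arguments
# and treat them as paired-end mates.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
srr=SRR1033853
touch "${srr}_1.fastq" "${srr}_2.fastq"   # two files match the pattern

files=( $srr*fastq )   # same unquoted expansion the command line performs
echo "STAR would see ${#files[@]} file arguments: ${files[*]}"
```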

ADD REPLY
0
Entering edit mode

It is single-end. I have built a job submission for one single fastq file:

#!/bin/bash
#SBATCH -o /naslx/projects/t1172/di36dih/Control/MUS/myjob.%j.%N.pipelineSTAR.out
#SBATCH -D /naslx/projects/t1172/di36dih/Control/MUS/GSE52583
#SBATCH -J teraMUS1
#SBATCH --clusters=serial
#SBATCH --get-user-env
#SBATCH --export=NONE
#SBATCH --time=00:10:00

genome=/naslx/projects/t1172/di36dih/Genomes/mus
geneanno=/naslx/projects/t1172/di36dih/Genomes/mus/Mus_musculus.GRCm38.88.gtf
metagenome=/naslx/projects/t1172/di36dih/Genomes/bacteriaphageviroidvirus

#Align to MUS genome
/naslx/projects/t1172/di36dih/Linux_x86_64_static/STAR \
--readFilesIn SRR1033853.fastq \
--genomeDir $genome \
--runThreadN 20 \
--quantMode GeneCounts \
--outReadsUnmapped Fastx \
--outFileNamePrefix SRR1033853.mus.


# Rename outputs to contain SRR ID
mv SRR1033853.mus.ReadsPerGene.out.tab SRR1033853.mus.tab

# Align to viral genome
/naslx/projects/t1172/di36dih/Linux_x86_64_static/STAR \
--genomeDir $metagenome \
--readFilesIn SRR1033853.mus.Unmapped* \
--runThreadN 20 \
--outSAMtype BAM SortedByCoordinate \
--outFilterMultimapNmax 11289 \
--outFileNamePrefix SRR1033853viral.

Even with that single call I get the error message.
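One observation (mine, not from the thread): the script above requests neither memory nor enough wall time. A sketch of the resource directives it would typically need, with placeholder values rather than the poster's actual requirements:

```shell
#!/bin/bash
# Sketch with placeholder values -- not the poster's actual requirements.
# Without an explicit --mem, many clusters grant only a small default,
# and exceeding it gets the process OOM-killed, which looks exactly like
# the bare "Killed" reported above.
#SBATCH --time=10:00:00
#SBATCH --mem=800G
#SBATCH --cpus-per-task=20
```

The --cpus-per-task value should match the --runThreadN 20 passed to STAR, otherwise the threads contend for fewer allocated cores.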

ADD REPLY
2
Entering edit mode

I think this may be your problem: #SBATCH --time=00:10:00. That requests only 10 minutes. If you meant to ask for 10 hours, it should have been #SBATCH --time=10:00:00.

Did your job get killed right away or after 10 minutes?

ADD REPLY
0
Entering edit mode

It would also be wise to quote your filepath strings, i.e. change genome=/naslx/projects/t1172/di36dih/Genomes/mus to genome='/naslx/projects/t1172/di36dih/Genomes/mus'.
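A small sketch (hypothetical path) of what actually goes wrong without quotes once a path contains whitespace, which is why unquoted paths often appear to work:

```shell
#!/usr/bin/env bash
# Sketch with a hypothetical path: unquoted expansion undergoes word
# splitting, so a path containing a space becomes two arguments.
dir="/tmp/my genomes"
words_unquoted=( $dir )      # splits into "/tmp/my" and "genomes"
words_quoted=( "$dir" )      # stays one argument
echo "unquoted: ${#words_unquoted[@]} words; quoted: ${#words_quoted[@]} word"
```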

ADD REPLY
0
Entering edit mode

Sadly it got killed right away, and the paths should be fine.

ADD REPLY
0
Entering edit mode

Was the error you originally posted the only thing in /naslx/projects/t1172/di36dih/Control/MUS/myjob.%j.%N.pipelineSTAR.out? If not, can you take a look at that file and see whether there are additional error messages?

Keep in mind that you will need to change the time request for any additional jobs you run later.

ADD REPLY
1
Entering edit mode

HPC clusters all tend to have their own rules, features, and quirks. I have a feeling this might be a problem with how you're trying to run the job rather than with STAR. Your cluster should have a support team better equipped to handle this issue.

Assuming you're running Slurm, there should be some other tools available to determine why the job was killed.
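For instance (the job ID below is a placeholder), sacct reports the final job state, which distinguishes an out-of-memory kill from a timeout or a cancellation:

```shell
#!/usr/bin/env bash
# Sketch, assuming a Slurm cluster: sacct shows the final job state
# (OUT_OF_MEMORY, TIMEOUT, CANCELLED, ...), i.e. who killed the job and why.
# 12345 is a hypothetical job ID.
jobid=12345
if command -v sacct >/dev/null; then
    out=$(sacct -j "$jobid" --format=JobID,State,Elapsed,MaxRSS,ReqMem,Timelimit)
else
    out="sacct not found; run this on a Slurm login node"
fi
printf '%s\n' "$out"
```

Comparing MaxRSS against ReqMem, and Elapsed against Timelimit, usually narrows the cause down immediately.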

ADD REPLY
0
Entering edit mode

Do you need to use STAR specifically? Can you not just use a standard aligner and see if that job completes?

ADD REPLY
0
Entering edit mode
7.0 years ago

I got the problem solved; it wasn't STAR but the HPC cluster. It turned out the documentation on the HPC cluster's site was wrong and the Slurm call contained a mistake. It is now running.
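The mistake is not specified, but one common Slurm pitfall consistent with the original command (a guess on my part, not confirmed by the poster): sbatch only parses options that appear before the script name, so in sbatch jobMUS1.sh --mem=1200000mb the memory request never reaches the scheduler.

```shell
# Sketch: where options land in an sbatch call.
sbatch --mem=1200G jobMUS1.sh      # --mem is parsed by sbatch
sbatch jobMUS1.sh --mem=1200000mb  # --mem is passed to jobMUS1.sh as $1
```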

ADD COMMENT
0
Entering edit mode

Which SLURM call was incorrect?

ADD REPLY
