Biostar Beta. Not for public use.
Question: STAR command gets Killed

Hello, I am currently trying to map large files on a big computing cluster and am getting an error that is not very informative. I submit a job via sbatch jobMUS1.sh --mem=1200000mb, which calls another script (alignSTARpipeMUSViral2.sh) that runs STAR to map each of my fastq files. The first STAR call is killed with:

/naslx/projects/t1172/di36dih/Control/MUS/alignSTARpipeMUSViral2.sh: line 19: 12217 Killed /naslx/projects/t1172/di36dih/Linux_x86_64_static/STAR --readFilesIn $srr*fastq --genomeDir $genome --runThreadN 20 --quantMode GeneCounts --outReadsUnmapped Fastx --outFileNamePrefix $srr.mus.

It only shows the line number of the STAR call in the script, "Killed", and the STAR call itself. Has anyone experienced this kind of error when working on a cluster? Is it an error message from STAR or from the cluster? Memory should not be the problem.
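For what it's worth, the "line 19: 12217 Killed" text is bash reporting that the STAR process received SIGKILL from outside (typically the kernel OOM killer or the scheduler enforcing a limit), not an error printed by STAR itself. A minimal sketch reproducing the shape of that report:

```shell
# A process terminated by SIGKILL exits with status 137 (128 + signal 9);
# when this happens inside a script, bash prints "line N: <pid> Killed".
bash -c 'kill -KILL $$'   # child bash kills itself, as the OOM killer would
echo "exit status: $?"    # prints: exit status: 137
```

An exit status of 137 in your job log is therefore a strong hint that something outside STAR terminated it.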

— cakesebastian • 0, 2.8 years ago

If the job is actually being killed, it often means the wall time required for the job exceeds your allowance on the cluster. Are you sure your jobs are running within the scope of your allowances?
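A quick way to compare your request against the cluster's limits (assuming Slurm; output columns vary by site configuration):

```shell
# Per-partition wall-time limit and memory per node (MB)
sinfo -o "%P %l %m"
# Your own jobs: job ID, name, time limit requested, time left
squeue -u "$USER" -o "%i %j %l %L"
```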

— Joe • 12k, 2.9 years ago

As the sys-admin wrote me, it should be within my allowances. I am currently checking on that. I have sent him the error message too, but have no answer yet.

— cakesebastian • 0, 2.9 years ago

Are you asking for 1200000 MB of memory, which is 1200 GB, or 1.2 TB? Do you really need all of that?

— h.mon • 25k, 2.9 years ago

I realised I can use less RAM when I am not using the genomeLoad parameter, but not less than 800 GB. I am executing two consecutive STAR commands against a large index of all bacterial genomes, where the SA file alone is 160 GB in size.
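If RAM is the hard constraint, STAR has index-time options that trade mapping speed for memory. A hedged sketch (the FASTA file name is hypothetical, and the exact savings depend on the genome):

```shell
# --genomeSAsparseD 2 builds a sparser suffix array (roughly half the
# SA size, at some cost in mapping speed); --limitGenomeGenerateRAM
# caps RAM during index generation (in bytes).
STAR --runMode genomeGenerate \
     --genomeDir /naslx/projects/t1172/di36dih/Genomes/bacteriaphageviroidvirus \
     --genomeFastaFiles all_bacteria.fa \
     --genomeSAsparseD 2 \
     --limitGenomeGenerateRAM 800000000000
```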

— cakesebastian • 0, 2.9 years ago

What do you intend to do? Is STAR the right tool for the job? What about using CLARK or KRAKEN?

— h.mon • 25k, 2.9 years ago

Not a STAR user, but --readFilesIn $srr*fastq does not look right. For PE data it should be --readFilesIn read1 read2. Is this a single-end dataset?
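The risk with $srr*fastq is that the glob expands to every matching file, and STAR would receive any extras as additional read files. A runnable sketch of the pitfall (dummy file names for illustration):

```shell
# Create two files that both match the pattern SRR1033853*fastq
tmp=$(mktemp -d) && cd "$tmp"
srr=SRR1033853
touch "${srr}.fastq" "${srr}.trimmed.fastq"
set -- ${srr}*fastq   # expand the glob the way STAR's command line would
echo "files passed to --readFilesIn: $#"   # prints 2, not 1
```

Naming the input file explicitly, as in the script below, avoids this.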

— genomax • 68k, 2.9 years ago

It is single-end. I have built a job submission for one single fastq file:

#!/bin/bash
#SBATCH -o /naslx/projects/t1172/di36dih/Control/MUS/myjob.%j.%N.pipelineSTAR.out
#SBATCH -D /naslx/projects/t1172/di36dih/Control/MUS/GSE52583
#SBATCH -J teraMUS1
#SBATCH --clusters=serial
#SBATCH --get-user-env
#SBATCH --export=NONE
#SBATCH --time=00:10:00

genome=/naslx/projects/t1172/di36dih/Genomes/mus
geneanno=/naslx/projects/t1172/di36dih/Genomes/mus/Mus_musculus.GRCm38.88.gtf
metagenome=/naslx/projects/t1172/di36dih/Genomes/bacteriaphageviroidvirus

#Align to MUS genome
/naslx/projects/t1172/di36dih/Linux_x86_64_static/STAR \
--readFilesIn SRR1033853.fastq \
--genomeDir $genome \
--runThreadN 20 \
--quantMode GeneCounts \
--outReadsUnmapped Fastx \
--outFileNamePrefix SRR1033853.mus.


# Rename outputs to contain SRR ID
mv SRR1033853.mus.ReadsPerGene.out.tab SRR1033853.mus.tab

# Align to viral genome
/naslx/projects/t1172/di36dih/Linux_x86_64_static/STAR \
--genomeDir $metagenome \
--readFilesIn SRR1033853.mus.Unmapped* \
--runThreadN 20 \
--outSAMtype BAM SortedByCoordinate \
--outFilterMultimapNmax 11289 \
--outFileNamePrefix SRR1033853viral.

Even with that single call I get the error message.

— cakesebastian • 0, 2.9 years ago • updated 2.9 years ago by genomax • 68k

I think this may be your problem: #SBATCH --time=00:10:00. That requests only 10 minutes. If you meant to ask for 10 hours, it should have been #SBATCH --time=10:00:00.

Did your job get killed right away or after 10 minutes?
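For reference, the time formats sbatch accepts (per the Slurm sbatch documentation):

```shell
#SBATCH --time=00:10:00    # HH:MM:SS   -> 10 minutes
#SBATCH --time=10:00:00    # HH:MM:SS   -> 10 hours
#SBATCH --time=1-12:00:00  # D-HH:MM:SS -> 1 day 12 hours
```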

— genomax • 68k, 2.9 years ago

It would also be wise to quote your file-path strings, i.e. change genome=/naslx/projects/t1172/di36dih/Genomes/mus to genome='/naslx/projects/t1172/di36dih/Genomes/mus'.
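Quoting matters most when a value can contain spaces or glob characters. A small runnable demonstration (the path is hypothetical):

```shell
genome='/tmp/my genomes/mus'            # a path with a space, for illustration
printf '%s\n' $genome   | wc -l         # unquoted: word-split into 2 arguments
printf '%s\n' "$genome" | wc -l         # quoted: stays 1 argument
```

The unquoted expansion would hand STAR two broken paths instead of one valid one.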

— Joe • 12k, 2.8 years ago

It sadly got killed right away, and the paths should be fine.

— cakesebastian • 0, 2.8 years ago

Was the error you originally posted the only thing in /naslx/projects/t1172/di36dih/Control/MUS/myjob.%j.%N.pipelineSTAR.out? If not, can you take a look at that file and see if there are additional error messages?

Keep in mind that you will need to change the time request for any additional jobs you run later.

— genomax • 68k, 2.8 years ago

HPC clusters all tend to have their own rules, features, and so on. I have a feeling this might be a problem with how you're trying to run the job and not with STAR. Your cluster should have some kind of support team better equipped to handle this issue.

Assuming you're running Slurm, there should be other tools available to determine why the job was killed.
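With Slurm accounting enabled, sacct can usually tell you whether the job hit the time limit or ran out of memory (field names per the sacct man page; the job ID here is hypothetical):

```shell
# State will read e.g. TIMEOUT, OUT_OF_MEMORY (on recent Slurm), or
# FAILED; MaxRSS shows peak memory actually used vs. ReqMem requested.
sacct -j 12345 --format=JobID,State,ExitCode,Elapsed,Timelimit,MaxRSS,ReqMem
```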

— pld • 4.8k, 2.9 years ago

Do you need to use STAR specifically? Can you not just use a standard aligner and see if that job completes?

— Joe • 12k, 2.8 years ago

I got the problem solved; it wasn't STAR but the HPC cluster. It turns out the documentation on the HPC cluster's site was wrong and the Slurm call had a mistake. It is now running.

— cakesebastian • 0, 2.8 years ago

Which SLURM call was incorrect?

— genomax • 68k, 2.8 years ago
