TopHat2 "Reporting output tracks" step failed
9.2 years ago • mbio.kyle ▴ 380

I am working on analysing some RNA-seq data using TopHat2. The data come from a variety of sources (SRA, CGHub, etc.), and I have both paired-end and single-end reads. I have consistently run into the following error:

[2015-02-13 16:13:23] Beginning TopHat run (v2.0.13)
-----------------------------------------------
[2015-02-13 16:13:23] Checking for Bowtie
          Bowtie version:     2.2.2.0
[2015-02-13 16:13:23] Checking for Bowtie index files (genome)..
[2015-02-13 16:13:23] Checking for reference FASTA file
[2015-02-13 16:13:23] Generating SAM header for #############3
[2015-02-13 16:13:23] Reading known junctions from GTF file
[2015-02-13 16:13:23] Preparing reads
     left reads: min. length=58, max. length=58, 32687348 kept reads (12356 discarded)
[2015-02-13 16:20:48] Building transcriptome data files #######
[2015-02-13 16:20:48] Building Bowtie index from ########.fa
[2015-02-13 16:20:49] Mapping left_kept_reads to transcriptome ########### with Bowtie2
[2015-02-13 16:27:48] Resuming TopHat pipeline with unmapped reads
[2015-02-13 16:27:48] Mapping left_kept_reads.m2g_um to genome ######## with Bowtie2
[2015-02-13 16:49:05] Mapping left_kept_reads.m2g_um_seg1 to genome ###### with Bowtie2 (1/2)
[2015-02-13 16:52:27] Mapping left_kept_reads.m2g_um_seg2 to genome ######## with Bowtie2 (2/2)
[2015-02-13 16:57:39] Searching for junctions via segment mapping
[2015-02-13 17:17:24] Retrieving sequences for splices
[2015-02-13 17:17:24] Indexing splices
[2015-02-13 17:17:25] Mapping left_kept_reads.m2g_um_seg1 to genome segment_juncs with Bowtie2 (1/2)
[2015-02-13 17:18:53] Mapping left_kept_reads.m2g_um_seg2 to genome segment_juncs with Bowtie2 (2/2)
[2015-02-13 17:20:46] Joining segment hits
[2015-02-13 17:20:46] Reporting output tracks
    [FAILED]
Error running /usr/bin/tophat_reports (.......)

I am running TopHat with all default parameters on a Debian Linux workstation with more than adequate RAM, processing power, and hard drive space. I am running TopHat2 version 2.0.13, which I reinstalled and built from source. I have found many mentions of this error on various forums, but they are old and appear to have been resolved by past updates.
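
For reference, the invocation is essentially the stock single-end form sketched below; the index and file names are placeholders for the paths redacted in the log above:

    tophat2 -G genes.gtf -o tophat_out genome_index reads.fastq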

I have also encountered cases where one sample from a group triggers the error while another completes without any error using the exact same parameters.

I am looking for any and all suggestions, tips, or advice.

Thanks!

alignment software-error tophat2 rna-seq • 5.4k views

I had the same problem and increasing memory (RAM) solved the issue. How do you know you have more than adequate RAM?
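
One way to verify, rather than eyeballing a monitor, is to wrap the run in GNU time and read the peak memory afterwards. A minimal sketch, with placeholder arguments:

    # /usr/bin/time is GNU time (not the shell builtin); -v reports peak RSS
    /usr/bin/time -v tophat2 -G genes.gtf -o tophat_out genome_index reads.fastq 2> time.log
    grep "Maximum resident set size" time.log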


I have 32GB on my workstation, and I have been monitoring the progress via htop; the RAM usage does not seem to go over ~1GB. I am hoping 32GB is enough...

Thanks


The only thing you can do is look in the run log and execute the last command manually. Perhaps that will produce a more meaningful error message.
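
The commands TopHat executed are recorded in logs/run.log inside the output directory; the failing tophat_reports call is near the end. For example (output directory name assumed):

    # Show the last few commands TopHat ran; tophat_reports is near the end
    tail -n 5 tophat_out/logs/run.log
    # Paste that tophat_reports line back into the shell to see the real error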


I had a problem with TopHat2 when I aligned 3'-UTR-derived reads and no junctions were found at all. How many junctions does TopHat2 report in your runs?
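
A quick way to count them, assuming the default output directory (the first line of junctions.bed is a track header, hence the -1):

    # Number of junctions TopHat reported
    echo $(( $(wc -l < tophat_out/junctions.bed) - 1 ))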

8.6 years ago

I encountered the same error using TopHat 2.0.13.

I experienced this error when trying to map reads to a custom gene sequence. When I looked in the run.log file produced by TopHat, I noticed that some of the parameters in the call to tophat_reports were missing. If you find the same thing, you have very likely run into the same problem as I did. You will find that even if you add those parameters manually, tophat_reports will still fail to report output tracks.

This error can occur when none of your reads map. Since that was the case in my situation, adding another gene sequence that reads do map to was an easy workaround.
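
To check whether anything mapped at all, on a run that gets far enough to produce output, something along these lines works (align_summary.txt is written by recent TopHat2 versions; the second command assumes samtools is installed):

    # Mapping rate as reported by TopHat itself
    cat tophat_out/align_summary.txt
    # Or count mapped reads directly in the BAM
    samtools flagstat tophat_out/accepted_hits.bam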

-Michael

8.0 years ago • kanika.151 ▴ 130

I got this error when my disk was 97% full, so you might have to free up some space; also make sure no other running jobs are using a lot of RAM.
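
A quick check on the filesystem that holds the output directory (TopHat keeps its large intermediate files under the output directory's tmp/ subfolder):

    # Free space on the filesystem holding the TopHat output
    df -h tophat_out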
