Hello, I am using the "centrifuge-build" tool for nanopore metagenomics sequencing. My system has 256 GB of RAM and 40 cores available. When I start the indexing process with a specific number of cores, memory usage gradually increases, and at some point in the process it reaches 100% of memory; the process hangs and is finally killed. So please suggest a way to control memory usage. In the tool I have come across certain memory-related parameters like "--bmax" and "--dcv", but I am very confused about what these parameters do and how to set them. Please help me out. Thanks in advance :)
Please make it easier for us and add a link to the tool you are talking about. It is also more likely that the developers themselves can help you, so a GitHub issue would be the most appropriate place.
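In the meantime, the manual's description of those two flags may help: --bmax caps the number of suffixes per block during blockwise suffix sorting (smaller = less memory, slower build), and --dcv sets the period of the difference-cover sample (larger = less memory, slower build). Per the manual, the tool normally auto-selects these, so to set them by hand you apparently also need -a/--noauto. A sketch only, with illustrative values and placeholder file names (your actual inputs will differ):

```shell
# Sketch, not a tuned command. -a/--noauto disables automatic parameter
# selection so that --bmax and --dcv take effect.
# Smaller --bmax  -> smaller suffix-sorting blocks -> lower peak RAM.
# Larger  --dcv   -> sparser difference-cover sample -> lower peak RAM.
# Fewer threads (-p) also reduces memory, at the cost of wall-clock time.
centrifuge-build -a --bmax 268435456 --dcv 4096 -p 8 \
    --conversion-table seqid2taxid.map \
    --taxonomy-tree taxonomy/nodes.dmp \
    --name-table taxonomy/names.dmp \
    input.fa index_prefix
```

If that still exhausts memory, halving --bmax again (and/or doubling --dcv) is the usual next step; the trade-off is purely build time.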
As you suggested, I have posted the same question as a GitHub issue.
And here is the link for the "centrifuge-build" tool: https://ccb.jhu.edu/software/centrifuge/manual.shtml. Awaiting your response.
Thank you.
Hello, I am trying to index the NCBI nt database using the "centrifuge-build" tool, but I am also running out of memory. I'm running it on a 16-core, 360 GB RAM system but getting an OUT_OF_MEMORY error.
I'm using: centrifuge-build -p 16 --bmax 1342177280 --conversion-table gi_taxid_nucl.map --taxonomy-tree taxonomy/nodes.dmp --name-table taxonomy/names.dmp nt.fa nt
Any suggestions?
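One thing worth trying, based on the manual (untested, values illustrative): --bmax is normally chosen automatically, so an explicit value may be re-selected unless you also pass -a/--noauto. Lowering --bmax, raising --dcv, and reducing the thread count all trade build time for lower peak memory:

```shell
# Sketch of a lower-memory variant of the command above; the --bmax and
# --dcv values are illustrative guesses, not tuned for the nt database.
# -a/--noauto makes the manual --bmax/--dcv settings stick.
centrifuge-build -p 8 -a --bmax 335544320 --dcv 4096 \
    --conversion-table gi_taxid_nucl.map \
    --taxonomy-tree taxonomy/nodes.dmp \
    --name-table taxonomy/names.dmp \
    nt.fa nt
```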