How can I resolve a persistent 'duplicate row.names' error?
7.0 years ago
zizigolu ★ 4.3k

hi,

after a lot of googling I have not been able to solve this error:

mycounts <- read.table("miRNA.txt", header = T, sep = "\t")  

> rownames(mycounts) <- mycounts[ , 1]
Error in `row.names<-.data.frame`(`*tmp*`, value = value) : 
  duplicate 'row.names' are not allowed

> head(mycounts[,1:4])
             X GSM381739 GSM381740 GSM381741
1 hsa-miR-154*  5.695187  5.673377  5.737937
2  hsa-miR-30b  5.727894  5.653218  5.664400
3  hsa-miR-379  5.731220  5.749713  5.679234
4 hsa-miR-517b  5.719527  5.655973  5.746567
5  hsa-miR-634  6.119763  5.886617  5.776004
6  hsa-miR-539  5.683806  5.836444  5.850200
> tail(mycounts[,1:4])
                     X GSM381739 GSM381740 GSM381741
1548       hsa-miR-553  5.742557  5.804803  5.762452
1549 kshv-miR-K12-6-3p  5.698515  5.713168  5.643890
1550       hsa-miR-570  5.787095  5.627410  5.700126
1551 kshv-miR-K12-4-3p  5.711230  5.646727  5.663816
1552       hsa-miR-802  5.640715  5.717633  5.656335
1553       hsa-miR-581  5.576380  5.753701  5.600545
> dim(mycounts)
[1] 1553   37
> any(duplicated(colnames(mycounts)))
[1] FALSE
> which(duplicated(mycounts))
integer(0)
>
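A minimal sketch (with hypothetical data, not the actual miRNA.txt) reproducing the error: the assignment fails because the intended row-name column itself contains duplicates, so that column is the thing to check.

```r
# Hypothetical miniature of mycounts: the ID column contains a duplicate
mycounts <- data.frame(
  X = c("hsa-miR-154", "hsa-miR-30b", "hsa-miR-154"),  # "hsa-miR-154" twice
  GSM381739 = c(5.70, 5.73, 5.72),
  stringsAsFactors = FALSE
)

any(duplicated(mycounts[, 1]))          # TRUE: the check that reveals the problem
# rownames(mycounts) <- mycounts[, 1]   # would fail: duplicate 'row.names' not allowed

# One possible workaround: disambiguate the names before assigning them
rownames(mycounts) <- make.unique(mycounts[, 1])
```

`make.unique()` appends ".1", ".2", ... to repeated values; whether that is appropriate, or whether duplicate rows should instead be averaged or removed, depends on where the duplicates came from.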
R software error • 1.7k views

can you share the miRNA.txt file?

also

any(duplicated(colnames(mycounts)))
[1] FALSE

shouldn't it be

any(duplicated(rownames(mycounts)))

and therefore

which(duplicated(mycounts[,1]))

thank you,

I am trying to share my file; it is a real struggle for me.

> any(duplicated(rownames(mycounts)))
[1] FALSE

this is the link to my data, please take a look


That does not look like a link to a data file.


yes, all right. I am trying to share it on Google Drive but there is no "share with people" option. I will keep trying to share my file.


Right-click on the file in Google Drive and choose "Get shareable link". Paste that link above. Remember to turn the link off once your question is answered.


sorry,

this is the address of my file; could you please inspect it?

http://s000.tinyupload.com/?file_id=19534889773988350216


What about using row.names = NULL? From the help page:

row.names: a vector of row names. This can be a vector giving the actual row names, or a single number giving the column of the table which contains the row names, or character string giving the name of the table column containing the row names.

If there is a header and the first row contains one fewer field than the number of columns, the first column in the input is used for the row names. Otherwise if ‘row.names’ is missing, the rows are numbered.

Using ‘row.names = NULL’ forces row numbering. Missing or ‘NULL’ ‘row.names’ generate row names that are considered to be ‘automatic’ (and not preserved by ‘as.matrix’).
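A small self-contained sketch (writing a hypothetical two-row file to a temp location) of what the help page describes: with `row.names = NULL`, rows are numbered automatically, so `read.table` succeeds even when the first column contains duplicates.

```r
# Hypothetical tab-separated file with a duplicated identifier in column 1
tmp <- tempfile(fileext = ".txt")
writeLines(c("X\tGSM1", "hsa-miR-154\t5.70", "hsa-miR-154\t5.72"), tmp)

# row.names = NULL forces automatic row numbering ("1", "2", ...),
# and the duplicated first column is kept as an ordinary data column.
mycounts <- read.table(tmp, header = TRUE, sep = "\t", row.names = NULL)
rownames(mycounts)
colnames(mycounts)
```

Note this only defers the problem: the duplicated column still cannot be assigned as row names afterwards.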


thank you so much

> mycounts <- read.table("miRNA.txt", header = T, sep = "\t", row.names = NULL)
> View(mycounts)
> rownames(mycounts) <- mycounts[ , 1]
Error in `row.names<-.data.frame`(`*tmp*`, value = value) : 
  duplicate 'row.names' are not allowed

Oh, I thought the error came from read.table, but it's from the line below. My bad.

What about any(duplicated(mycounts[,1]))? That's the column you are trying to use as row names, and it turns out to contain duplicates.

This actually is more of a programming issue, and not really bioinformatics.

> any(duplicated(mycounts[,1]))
[1] TRUE
>

thank you, this error has kept me stuck for weeks


I downloaded your file to have a look, and your first column really contains duplicates. Actually, lots of duplicates.
Check it out for yourself:

cut -f1 miRNA.txt | sort -u | wc -l

=> 555 unique identifiers

cut -f1 miRNA.txt | wc -l

=> 1554 identifiers in total

Use the following to get all duplicated identifiers

cut -f1 miRNA.txt | sort | uniq -d

(527 identifiers are used more than once)

Note that you can also do this easily in Excel, using conditional formatting to highlight duplicate values.


Thanks a lot,

after removing the duplicates the error went away.


Okay, so this is solved, but are you sure you didn't remove data by removing duplicates? How did those duplicates get there in the first place? I have no idea how you obtained the data and which analysis you aim to perform.


@F: you should consider these excellent questions carefully before moving forward.


Exactly, I removed the duplicates in Excel and then the error went away.

I downloaded the GSE15288 series and normalized it following a link you suggested in an older post:

http://matticklab.com/index.php?title=Single_channel_analysis_of_Agilent_microarray_data_with_Limma

Then I removed unnecessary columns, but when I tried to assign row.names I got the error; removing the duplicates made it go away.

I was going to calculate correlations between miRNAs and mRNAs for human hepatitis C virus (HCV) liver biopsy samples with the miRComb R package. I also got the same error for the mRNA file, but I only asked about the miRNA file. This error has held me up for a month.


I now took a random duplicate identifier, dmr_308. If I look at both records (rows) in the file, it's clear that those are different. So while the identifier is a duplicate, the data isn't. (The same is true for other duplicate entries).

I don't know enough about microarrays to figure out why you have the same identifier (gene?) multiple times, maybe because multiple probes target the same gene. I'm an RNA-seq guy, too young for microarrays ;-)
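If the duplicates really are multiple measurements of the same identifier, one option (a sketch on hypothetical data, mirroring what limma's avereps() does for probe-level data) is to average the rows per identifier instead of deleting them, so no measurements are discarded:

```r
# Hypothetical data: "dmr_308" appears twice with different values
mycounts <- data.frame(
  X = c("dmr_308", "dmr_308", "hsa-miR-30b"),
  GSM381739 = c(5.6, 5.8, 5.73),
  GSM381740 = c(5.7, 5.9, 5.65),
  stringsAsFactors = FALSE
)

# Average the expression columns within each identifier
averaged <- aggregate(mycounts[, -1], by = list(X = mycounts$X), FUN = mean)

# The identifiers are now unique, so the row-name assignment succeeds
rownames(averaged) <- averaged$X
```

Whether averaging is appropriate depends on why the identifiers repeat (e.g. multiple probes per gene), which is exactly the question raised above.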


thank you for your time,

in the normalization, with

y.ave <- avereps(y, ID=y$genes$ProbeName)

I mapped probe sets to gene names.

Thanks again.

