Filter reversed rows in a dataframe (pandas)
6.0 years ago
Chvatil ▴ 130

I have a problem with my dataframe. Let's say I have a dataframe such as:

    cluster_name    qseqid  sseqid  pident_x    qstart  qend    sstar   send
2   1   seq1_0035_0035  seq13_0042_0035 0.73    42  133 46  189
3   1   seq1_0035_0035  seq13_0042_0035 0.73    146 283 287 389
4   1   seq1_0035_0035  seq13_0042_0035 0.73    301 478 402 503
5   1   seq13_0042_0035 seq1_0035_0035  0.73    46  189 42  133
6   1   seq13_0042_0035 seq1_0035_0035  0.73    287 389 146 283
7   1   seq13_0042_0035 seq1_0035_0035  0.73    402 503 301 478
8   2   seq4_0042_0035  seq2_0035_0035  0.71    256 789 125 678
9   2   seq4_0042_0035  seq2_0035_0035  0.71    802 1056    706 985
10  2   seq4_0042_0035  seq7_0035_0042  0.83    123 745 156 723
12  4   seq11_0035_0035 seq14_0042_0035 0.89    145 647 236 921
13  4   seq11_0035_0035 seq17_0042_0042 0.97    148 623 241 1002
14  5   seq17_0035_0042 seq17_0042_0042 0.94    188 643 179 746
...
(200 000 ROWS)
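
(For reference, a small reproducible version of this table, containing only the rows shown above, can be built like this; I use it further down to check ideas:)

    import pandas as pd

    # Minimal reproducible subset of the table above (same columns,
    # only the rows shown here, not the full 200 000).
    df = pd.DataFrame(
        {
            "cluster_name": [1, 1, 1, 1, 1, 1, 2, 2, 2, 4, 4, 5],
            "qseqid": ["seq1_0035_0035"] * 3 + ["seq13_0042_0035"] * 3
                      + ["seq4_0042_0035"] * 3 + ["seq11_0035_0035"] * 2
                      + ["seq17_0035_0042"],
            "sseqid": ["seq13_0042_0035"] * 3 + ["seq1_0035_0035"] * 3
                      + ["seq2_0035_0035", "seq2_0035_0035", "seq7_0035_0042",
                         "seq14_0042_0035", "seq17_0042_0042", "seq17_0042_0042"],
            "pident_x": [0.73] * 6 + [0.71, 0.71, 0.83, 0.89, 0.97, 0.94],
            "qstart": [42, 146, 301, 46, 287, 402, 256, 802, 123, 145, 148, 188],
            "qend": [133, 283, 478, 189, 389, 503, 789, 1056, 745, 647, 623, 643],
            "sstar": [46, 287, 402, 42, 146, 301, 125, 706, 156, 236, 241, 179],
            "send": [189, 389, 503, 133, 283, 478, 678, 985, 723, 921, 1002, 746],
        },
        index=[2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14],
    )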

As you can see, for rows 2, 3 and 4 in cluster 1:

 cluster_name    qseqid  sseqid  pident_x    qstart  qend    sstar   send
2   1   seq1_0035_0035  seq13_0042_0035 0.73    42  133 46  189
3   1   seq1_0035_0035  seq13_0042_0035 0.73    146 283 287 389
4   1   seq1_0035_0035  seq13_0042_0035 0.73    301 478 402 503

there are the reciprocal (reversed) rows 5, 6 and 7 in cluster 1:

 cluster_name    qseqid  sseqid  pident_x    qstart  qend    sstar   send
5   1   seq13_0042_0035 seq1_0035_0035  0.73    46  189 42  133
6   1   seq13_0042_0035 seq1_0035_0035  0.73    287 389 146 283
7   1   seq13_0042_0035 seq1_0035_0035  0.73    402 503 301 478

In case that is hard to see, it amounts to having:

    seq A vs seq B (here seq1_0035_0035 vs seq13_0042_0035)

and

    seq B vs seq A (here seq13_0042_0035 vs seq1_0035_0035)

It is the same comparison, and I want to keep only one of them, not both.

Here, for example, the goal is to keep only rows 2, 3 and 4 OR rows 5, 6 and 7 in my final dataframe. If I keep only the first ones, it gives:

    cluster_name    qseqid  sseqid  pident_x    qstart  qend    sstar   send
2   1   seq1_0035_0035  seq13_0042_0035 0.73    42  133 46  189
3   1   seq1_0035_0035  seq13_0042_0035 0.73    146 283 287 389
4   1   seq1_0035_0035  seq13_0042_0035 0.73    301 478 402 503
8   2   seq4_0042_0035  seq2_0035_0035  0.71    256 789 125 678
9   2   seq4_0042_0035  seq2_0035_0035  0.71    802 1056    706 985
10  2   seq4_0042_0035  seq7_0035_0042  0.83    123 745 156 723
12  4   seq11_0035_0035 seq14_0042_0035 0.89    145 647 236 921
13  4   seq11_0035_0035 seq17_0042_0042 0.97    148 623 241 1002
14  5   seq17_0035_0042 seq17_0042_0042 0.94    188 643 179 746

As you can also see, rows 8 and 9 involve the same pair of sequences but do not have the same coordinates (qstart, qend, sstar and send), so I keep both of them.

I already asked about something similar, but I now have a dataframe with around 200 000 rows, so I would like to find a solution that is not too time-consuming and does not use too much memory; my previous attempts took too much time and memory...

Does someone have an idea?

Thank you very much for your help.
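
One direction I have been sketching (only an idea, not benchmarked on the full 200 000 rows): build an order-independent key per row by putting the two sequence IDs in a fixed order and swapping the coordinate pairs in the same rows, then let `duplicated()` flag the reciprocal rows. The function name `drop_reciprocals` is just for illustration, and it assumes a reciprocal row is one where both the IDs and the coordinate pairs are swapped:

    import numpy as np
    import pandas as pd

    def drop_reciprocals(df):
        """Keep only one orientation of each reciprocal (A vs B / B vs A) hit.

        A row is dropped when an earlier row describes the same pair of
        sequences with the query/subject roles and coordinates swapped.
        Rows with the same pair but different coordinates are kept.
        """
        # True where the pair is stored in the "reversed" lexical order.
        swapped = df["qseqid"] > df["sseqid"]

        # Order-independent sequence pair.
        seq_a = np.where(swapped, df["sseqid"], df["qseqid"])
        seq_b = np.where(swapped, df["qseqid"], df["sseqid"])

        # Swap the coordinate pairs in the same rows, so that a hit and
        # its reciprocal end up with identical keys.
        start_a = np.where(swapped, df["sstar"], df["qstart"])
        end_a = np.where(swapped, df["send"], df["qend"])
        start_b = np.where(swapped, df["qstart"], df["sstar"])
        end_b = np.where(swapped, df["qend"], df["send"])

        key = pd.DataFrame(
            {
                "cluster_name": df["cluster_name"],
                "seq_a": seq_a, "seq_b": seq_b,
                "start_a": start_a, "end_a": end_a,
                "start_b": start_b, "end_b": end_b,
            },
            index=df.index,
        )
        # duplicated() marks the later of two identical keys, i.e. the
        # reciprocal row, so ~duplicated() keeps the first orientation only.
        return df[~key.duplicated(keep="first")]

On the small example above, `drop_reciprocals(df)` should keep rows 2, 3, 4, 8, 9, 10, 12, 13 and 14, which is exactly the desired output, but I have not checked speed or memory on the real table.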

For context, the first of my previous tries was:

    # For each row, check whether its (sseqid, qseqid) pair has already
    # appeared on an earlier row as a (qseqid, sseqid) pair; if so, the
    # row is a reversed duplicate and is filtered out.
    df[~pd.DataFrame({
        'tup': df[['sseqid', 'qseqid']].apply(tuple, axis=1),
        'inv_tups': df[['qseqid', 'sseqid']].apply(lambda t: (tuple(t), ), axis=1).cumsum().shift(1)}
    ).apply(lambda r: isinstance(r.inv_tups, tuple) and r.tup in r.inv_tups, axis=1)]

but it takes too much time for 200 000 rows.
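
(As far as I can tell, the slow part is the row-wise `apply` combined with the cumulative tuple concatenation, which grows quadratically. The same test, "has the reversed (qseqid, sseqid) pair been seen on an earlier row?", can be done in a single pass with a plain Python set; just a sketch, and like the attempt above it ignores the coordinate columns:)

    # Single pass over the ID columns with a set of already-seen pairs.
    # A row is dropped when its reversed pair appeared on an earlier row,
    # the same rule as the apply() version above, but in O(n).
    seen = set()
    keep = []
    for q, s in zip(df["qseqid"].to_numpy(), df["sseqid"].to_numpy()):
        keep.append((s, q) not in seen)   # reversed pair already seen?
        seen.add((q, s))
    df_filtered = df[keep]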

and the other one was:

    # Tuple (qseqid, sseqid) for each row, and the cumulative count of
    # every reversed (sseqid, qseqid) tuple seen so far, one dummy column
    # per distinct pair (this dense matrix is what uses the memory).
    this = data[["qseqid", "sseqid"]].apply(tuple, axis=1)
    cum = pd.get_dummies(data[["sseqid", 'qseqid']].apply(tuple, axis=1)).cumsum()

    # Add all-zero columns for pairs whose reversed orientation never
    # occurs, so the lookup below never misses, then look up, for each
    # row, how many times its reversed pair has appeared so far.
    this_zeros = pd.get_dummies(this)
    this_zeros[:] = 0
    keep = pd.concat(
        [cum, this_zeros[this_zeros.columns.difference(cum.columns)]], axis=1
    ).lookup(data.index, this)

    # Keep the rows whose lookup count is non-zero.
    data = data[keep.astype(bool)]

and it takes too much memory.
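
(If I understand it correctly, the memory goes into `pd.get_dummies`, which materialises a dense rows × distinct-pairs indicator matrix before the `cumsum`; `DataFrame.lookup` is also deprecated in recent pandas versions. A rough back-of-the-envelope check of that matrix size:)

    # Rough size of the dense matrix built by pd.get_dummies above:
    # one cell per (row, distinct (sseqid, qseqid) pair). The cumsum
    # then allocates another matrix of at least the same shape.
    n_rows = len(data)                                      # ~200 000
    n_pairs = data[["sseqid", "qseqid"]].drop_duplicates().shape[0]
    approx_gb = n_rows * n_pairs / 1e9                      # ~1 byte per cell
    print(f"~{approx_gb:.1f} GB for the dummies matrix alone")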

pandas python filtering