Is it possible to obtain groupby style counts without collapsing Pandas DataFrame?


Keywords: python


Question: 

I have a DataFrame with 9 columns, and I'm trying to add a column of counts of unique values based on the first 3 columns (e.g. cols A, B, and C must match to count as a unique value), but the remaining columns can vary. I attempted to do this with groupby:

df = pd.DataFrame(resultsFile500.groupby(['chr','start','end']).size().reset_index().rename(columns={0:'count'}))

This returns a DataFrame with 4 columns, and the counts are what I want. However, I also need values from the original DataFrame, so what I have been trying to do is somehow get those counts as a column in the original df. That would mean that if two rows had identical values in columns chr, start, and end, the counts column would be 2 in both rows, but they would not be collapsed into one row. Is there an easy solution here that I'm missing, or do I need to hack something together?
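The kind of hack I could fall back on would be merging the collapsed counts back onto the original frame on the three key columns, roughly like this (a sketch with made-up data; only the chr, start, and end column names come from my real file):

import pandas as pd

# Made-up stand-in for resultsFile500; only chr/start/end are real column names.
resultsFile500 = pd.DataFrame({
    'chr':   ['chr1', 'chr1', 'chr2'],
    'start': [100, 100, 200],
    'end':   [150, 150, 250],
    'score': [0.7, 0.2, 0.9],
})

# Collapsed counts, as above.
counts = (resultsFile500.groupby(['chr', 'start', 'end'])
                        .size().reset_index().rename(columns={0: 'count'}))

# Broadcast the counts back onto every original row via a merge on the keys.
withCounts = resultsFile500.merge(counts, on=['chr', 'start', 'end'])

But I'm hoping there is something more direct.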


1 Answer: 

You can use .transform to get non-collapsing behavior:

>>> df
   a  b  c  d  e
0  3  4  1  3  0
1  3  1  4  3  0
2  4  3  3  2  1
3  3  4  1  4  0
4  0  4  3  3  2
5  1  2  0  4  1
6  3  1  4  2  1
7  0  4  3  4  0
8  1  3  0  1  1
9  3  4  1  2  1
>>> df.groupby(['a','b','c']).transform('count')
   d  e
0  3  3
1  2  2
2  1  1
3  3  3
4  2  2
5  1  1
6  2  2
7  2  2
8  1  1
9  3  3
>>>

Note, since .transform returns a result aligned to the original index (with the grouping columns dropped), you'll have to choose an arbitrary column from the .transform result, but then just do:

>>> df['unique_count'] = df.groupby(['a','b','c']).transform('count')['d']
>>> df
   a  b  c  d  e  unique_count
0  3  4  1  3  0             3
1  3  1  4  3  0             2
2  4  3  3  2  1             1
3  3  4  1  4  0             3
4  0  4  3  3  2             2
5  1  2  0  4  1             1
6  3  1  4  2  1             2
7  0  4  3  4  0             2
8  1  3  0  1  1             1
9  3  4  1  2  1             3
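
If you'd rather not pick an arbitrary column, an alternative spelling is to select a single column before transforming; a minimal sketch using 'size', which counts rows per group regardless of NaNs (whereas 'count' counts non-null values in the selected column):

>>> df['unique_count'] = df.groupby(['a', 'b', 'c'])['a'].transform('size')

For the data above the result is the same, since none of the columns contain NaN.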