Question: GWAS - Relationship between Standard Error and P-value

Is there a relationship between the p-values obtained in a GWAS and the standard error of a SNP's effect size that can be explained either explicitly or intuitively? Methods for prediction based on effect sizes, such as PRSice, don't incorporate the standard error into the prediction model; is this because it is too complicated to do algorithmically, or because it is too difficult to define a "bad" standard error to prune out?

Are we in any way filtering for "good" SEs by filtering on p-value during PRS construction? That is, is there a relationship between SE and p-value such that we might expect our top SNPs to have SEs that are not very large relative to their effect sizes?

Asked 8 months ago by oceallc • 0 • updated 8 months ago by Sam ♦ 2.3k

The p-value is in fact a function of the effect size and the standard error. You can calculate the z-score by dividing the beta by the standard error, then transform the z-score into a p-value directly.
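A minimal sketch of that conversion, using illustrative beta and SE values (not from any real GWAS) and a standard-normal approximation for the Wald test:

```python
import math

beta = 0.12   # illustrative per-allele effect size estimate
se = 0.03     # illustrative standard error of that estimate

# Wald z-score: effect size divided by its standard error
z = beta / se

# Two-sided p-value under N(0, 1): P(|Z| > |z|) = erfc(|z| / sqrt(2))
p = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.2f}, p = {p:.3g}")
```

So for a fixed effect size, a larger SE directly means a larger p-value; the p-value filter is already a filter on the beta-to-SE ratio.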

One problem with the standard error is that it is correlated with the minor allele frequency of the SNP. So if the MAF in the GWAS sample differs from that in the target data, using the standard error will introduce bias. On the other hand, if the MAF is similar between the two datasets, then using the standard error might help to improve the model.
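To make the MAF dependence concrete, here is a rough sketch: for a linear-model GWAS with a standardized phenotype, the SE of the per-allele effect is approximately 1 / sqrt(2 * N * MAF * (1 - MAF)). The sample size and MAFs below are illustrative, and the formula is an approximation, not the exact SE any given GWAS software reports.

```python
import math

def approx_se(n: int, maf: float) -> float:
    """Approximate SE of a per-allele effect for a standardized
    phenotype: 1 / sqrt(2 * N * MAF * (1 - MAF))."""
    return 1.0 / math.sqrt(2 * n * maf * (1 - maf))

n = 100_000  # illustrative GWAS sample size

# Common variant: large effective allele count, small SE
print(approx_se(n, 0.40))

# Rare variant: same N, but a much larger SE
print(approx_se(n, 0.01))
```

This is why two SNPs with identical betas can have very different SEs purely because of their allele frequencies, and why carrying SEs across cohorts with different MAFs can mislead a prediction model.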

Answered 8 months ago by Sam ♦ 2.3k

