Run time, number of variants in model, and number of cores used #1
Thanks for your interest in MRCI. Since MRCI requires LD information during estimation, it matches the input SNPs against the 1KG common SNPs for which the LD data are pre-calculated, so the final input usually includes ~1M SNPs. Compared with traditional MR methods (which usually consider only a few significant independent SNPs), MRCI does take much longer to finish the analysis. We run MRCI under R-3.5.1, and it usually takes ~10 hours using 12 threads on a server, though the actual time also depends on the load on the server. Additionally, in my experience the "roptim" package runs much more slowly under higher versions of R. Could you try it again under R-3.5.1? I hope this helps.
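Since MRCI restricts the analysis to the SNPs it can match against the 1KG common-SNP set (leaving ~1M SNPs), pre-filtering large summary statistics to that overlap before launching MRCI can shrink the input up front. A minimal sketch of that intersection step, assuming your summary statistics are already parsed into rows with the SNP ID in the first field (the function name and column layout here are illustrative, not part of MRCI itself):

```python
def filter_to_reference(sumstat_rows, reference_ids, id_column=0):
    """Keep only rows whose SNP ID appears in the reference set.

    sumstat_rows:  iterable of already-split rows (lists of fields)
    reference_ids: collection of SNP IDs (e.g. the 1KG common SNPs)
    id_column:     index of the SNP-ID field in each row
    """
    ref = set(reference_ids)  # set membership is O(1) per lookup
    return [row for row in sumstat_rows if row[id_column] in ref]

# Tiny in-memory demo: three summary-statistic rows, two in the reference set.
rows = [
    ["rs123", "A", "G", "0.01"],
    ["rs456", "C", "T", "-0.02"],
    ["rs789", "G", "A", "0.03"],
]
kept = filter_to_reference(rows, {"rs123", "rs789"})
# kept contains only the rs123 and rs789 rows
```

The same idea applies whatever tool does the filtering; the point is that only the matched subset reaches the estimation step, so trimming a 4-8M-variant file beforehand mainly saves I/O and matching time rather than changing the final model.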
Thanks for your reply!
I am running the analysis using R v3.6.0 with Bioconductor. In my example, running on a server with 40 cores, the job was killed after 24 hours and did not seem close to finishing.
I have just resubmitted using R 3.5.0 (no 3.5.1 is available on my server).
I will let you know if this works!
Best,
Sean
… On 22 Oct 2023, at 06:23, zpliu wrote: (quoted reply above)
Hi!
Thanks for creating this very interesting tool; I am keen to produce some results using your approach. However, running the tool takes a very long time for me, even with 60 cores.
Could you give some insight into the expected run time, the number of cores used, and the typical number of variants included in the model in your real-data examples? I am currently trying to run the model using summary statistics that contain ~4-8M variants, with 20-60 cores. Is this reasonable? Is any further filtering needed?
Any tips would be much appreciated.
Best,
Sean