PBC completion super slow #11
Comments
A subprocess call is quite suboptimal and probably unnecessary. Where is the routine you use now?
It is the MDAnalysis atomgroup routine. I think it can be faster by disabling some checks and running it on the complete position matrix instead of per selection. Stuff like that.
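For context, a minimal sketch of what "running it on the complete position matrix" could look like, assuming the routine in question is MDAnalysis' `AtomGroup.unwrap`; the file names are placeholders and this is not the project's actual code:

```python
import MDAnalysis as mda

# Hypothetical input files, for illustration only.
u = mda.Universe("topol.tpr", "traj.xtc")

for ts in u.trajectory:
    # Make all molecules whole in one call over the full system,
    # instead of calling the routine once per bead selection.
    u.atoms.unwrap(compound="fragments", reference=None, inplace=True)
    # ... run the bead mapping on u.atoms.positions here ...
```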
The fastest will be to have a copy of the coordinates in reciprocal space and then, for every selection, subtract the first coordinate from the rest and add pi, floor everything into the 0-2 pi domain, subtract pi, convert back to regular coordinates and add the first particle again. This ensures that every group of atoms forming a bead is taken in its most compact form. You will have to do it again for bonded terms, but the procedure is quite cheap with numpy and easily implemented. If you want, I can add it. Which file is it?
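A rough numpy sketch of that procedure, assuming an orthorhombic box and a single-frame `(n_atoms, 3)` position array; the function and variable names are made up for illustration, and wrapping with period 1 in fractional coordinates is equivalent to the 0-2 pi domain described above:

```python
import numpy as np

def make_compact(positions, box, selections):
    """Return positions where each selection is wrapped around its first atom."""
    frac = positions / box                  # to "reciprocal" (fractional) space
    out = positions.copy()
    for sel in selections:                  # sel: indices of the atoms forming one bead
        ref = frac[sel[0]]                  # first coordinate of the group
        d = frac[sel] - ref                 # displacements from the reference atom
        d -= np.floor(d + 0.5)              # wrap into [-0.5, 0.5): minimum image
        out[sel] = (d + ref) * box          # back to Cartesian, re-adding the reference
    return out
```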
If it is cheap, why not. Compared to the MDAnalysis routine it cannot be much slower, and people could still do it with GROMACS. The code that does the mapping is this one: fast_forward/fast_forward/mapping.py, lines 105 to 118 in 1f26d30
Positions is an n_frames x n_atoms x 3 numpy array and the function is accelerated by numba. To get the best numba performance it is best to write as simple and plain Python as possible and avoid numpy magic within this function. I leave it up to you whether you first want to generate the coordinates in reciprocal space and then run this function, or do everything within this function.
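A hedged sketch of what the plain-loop style that tends to suit numba could look like here; the signature and names are assumptions for illustration, not the actual mapping.py code:

```python
import numpy as np
from numba import njit

@njit
def wrap_group(positions, idxs, box):
    """Wrap the atoms in `idxs` around the group's first atom, in place.

    positions : (n_frames, n_atoms, 3) array
    idxs      : integer indices of the atoms forming one bead
    box       : (3,) box lengths (orthorhombic box assumed)
    """
    n_frames = positions.shape[0]
    for f in range(n_frames):
        for k in range(1, len(idxs)):
            for d in range(3):
                delta = positions[f, idxs[k], d] - positions[f, idxs[0], d]
                delta -= np.floor(delta / box[d] + 0.5) * box[d]
                positions[f, idxs[k], d] = positions[f, idxs[0], d] + delta
```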
There should be a faster way to do the PBC completion. Subprocess call to GROMACS?
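For reference, the subprocess route mentioned here would look roughly like the sketch below (file names and the group selection are placeholders); as noted in the comments above, this approach is considered suboptimal:

```python
import subprocess

# Hypothetical call to GROMACS to make molecules whole across the box
# before mapping; not code from this project.
subprocess.run(
    ["gmx", "trjconv", "-f", "traj.xtc", "-s", "topol.tpr",
     "-pbc", "whole", "-o", "whole.xtc"],
    input=b"0\n",   # select the "System" group non-interactively
    check=True,
)
```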