
[WIP] Implement Poisson solver based on integrated Green functions #4611

Closed
RemiLehe wants to merge 29 commits

Conversation


@RemiLehe (Member) commented Jan 14, 2024

This uses the method from:
https://journals.aps.org/prab/abstract/10.1103/PhysRevSTAB.9.044204
https://journals.aps.org/prab/pdf/10.1103/PhysRevSTAB.10.129901
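
For reference, the kernel in these papers is built from the analytic antiderivative of 1/r (standard result, summarized here for context rather than quoted from the papers):

$$
F(x,y,z) = x y \ln(z+r) + y z \ln(x+r) + z x \ln(y+r)
- \frac{x^2}{2}\arctan\frac{y z}{x r}
- \frac{y^2}{2}\arctan\frac{z x}{y r}
- \frac{z^2}{2}\arctan\frac{x y}{z r},
\qquad r = \sqrt{x^2+y^2+z^2}.
$$

Up to the 1/(4πε0) prefactor, the effective Green function on each cell is the alternating sum of F over the eight cell corners, and the potential follows from an FFT-based convolution of this kernel with the deposited charge density.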

Because this uses the same FFT infrastructure as PSATD, the code currently needs to be compiled with -DWarpX_PSATD=ON.

Also: since the solver uses a global FFT, right now only one MPI rank participates in the FFT solve (but this MPI rank can still use a full GPU); this could be improved later by using heFFTe. The rest of the PIC operations (charge deposition, gather, etc.) are still performed by the different MPI ranks in parallel.
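
For context, here is a minimal AMReX sketch of the single-rank gather pattern described above (the helper name gather_on_rank0 and its signature are illustrative, not the code in this PR; it assumes a cell-centered src):

#include <AMReX_MultiFab.H>
#include <AMReX_Geometry.H>

// Gather a distributed MultiFab onto a single box owned by MPI rank 0,
// so that a global (single-rank) FFT can act on the full domain.
amrex::MultiFab
gather_on_rank0 (amrex::MultiFab const& src, amrex::Geometry const& geom)
{
    amrex::BoxArray ba_onegrid( geom.Domain() );                     // one box covering the whole domain
    amrex::DistributionMapping dm_onegrid( amrex::Vector<int>{0} );  // that single box lives on rank 0
    amrex::MultiFab dst( ba_onegrid, dm_onegrid, src.nComp(), 0 );
    dst.ParallelCopy( src, 0, 0, src.nComp() );                      // collective: all ranks send, rank 0 receives
    return dst;
}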

Comparison with the Bassetti-Erskine formula (see the test script in this PR):
[Screenshot, 2024-01-14: comparison of the IGF solver result with the Bassetti-Erskine formula]

TODO:

  • Test on GPU
  • Add back the relativistic factor in dx
  • Overcome limitation with single MPI rank (use ParallelCopy)
  • Fix bug in the multi-GPU implementation, in the case grid_type = collocated (this could be done with the regular Poisson solver instead of the relativistic one, so that we can output phi)
  • Add timers to the FFT and the calculation of the IGF
  • Finalize the automated test
  • Decide on the user interface: introduce a new parameter “warpx.poisson_solver” that can be set to “multigrid”, “igf”, etc., and decide how to handle compatibility with boundary conditions.
    This should follow the same logic as the Maxwell solver. In the PICMI interface, there is already a “method” parameter for the Poisson solver.
  • Add a new boundary condition “open” that is only valid for Poisson-like field solvers, and organize the documentation to group boundaries of similar types
  • Abort if someone tries to use mesh refinement

  • Properly set guard cells of phi outside of the global domain (useful for grid_type = collocated)

  • Save the FFT plans in a permanent object, so that we do not need to recompute them. Probably not needed: from the timers, it seems that the creation and destruction of the FFT plans does not take any significant time.

  • Clean up; move the code to a different file (maybe IntegratedGreenFunctionSolver.H) in ablastr

  • Handle dimensionalities other than 3D; for now, abort if we are not in 3D

  • Address the following warning, by calling the function in VectorPoissonSolver.H (for the magnetostatic solver):

/Users/rlehe/Documents/codes/warpx_directory/warpx/Source/ablastr/fields/PoissonSolver.H:69:1: warning: function 'computePhiIGF' is not needed and will not be emitted [-Wunneeded-internal-declaration]
computePhiIGF ( amrex::MultiFab const & rho,

  • Fix the ParallelCopy error that appears in DEBUG mode (see the sketch after this list):
Assertion `boxArray().ixType() == src.boxArray().ixType()' failed, file "/global/u2/a/aforment/src/warpx/build_pm_gpu_fft/_deps/fetchedamrex-src/Src/Base/AMReX_FabArrayCommI.H", line 330 !!!
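
Regarding the ParallelCopy assertion in the last item, a hedged sketch of the likely cause and fix, assuming the mismatch is between a cell-centered destination and a nodal source (variable names are illustrative):

// ParallelCopy requires source and destination to share the same index type
// (cell-centered vs nodal); the DEBUG assertion above fires when they differ.
// Converting the destination BoxArray to the nodality of the source avoids it.
amrex::BoxArray ba_onegrid( geom.Domain() );
ba_onegrid = amrex::convert( ba_onegrid, rho.boxArray().ixType() );  // match nodality of rho
amrex::DistributionMapping dm_onegrid( amrex::Vector<int>{0} );
amrex::MultiFab rho_onegrid( ba_onegrid, dm_onegrid, 1, 0 );
rho_onegrid.ParallelCopy( rho, 0, 0, 1 );                            // index types now match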

@ax3l added the labels component: electrostatic (electrostatic solver) and component: ABLASTR (components shared with other PIC codes) on Jan 17, 2024
}

void
computePhiIGF ( amrex::MultiFab const & rho,
Member

I think the function body of large functions like this can be moved to cpp files.

Member Author

Agreed. That's a good point.
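
For illustration, a minimal sketch of such a header/implementation split (the namespace and the exact signature are placeholders, not the final code):

// IntegratedGreenFunctionSolver.H -- declaration only
#include <AMReX_MultiFab.H>
#include <array>

namespace ablastr::fields
{
    void computePhiIGF ( amrex::MultiFab const& rho,
                         amrex::MultiFab& phi,
                         std::array<amrex::Real,3> const& cell_size );
}

// IntegratedGreenFunctionSolver.cpp -- definition (the large function body lives here)
void
ablastr::fields::computePhiIGF ( amrex::MultiFab const& rho,
                                 amrex::MultiFab& phi,
                                 std::array<amrex::Real,3> const& cell_size )
{
    // ... FFT-based convolution with the integrated Green function ...
    amrex::ignore_unused(rho, phi, cell_size);
}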

amrex::Real const y = j0*dy;
amrex::Real const z = k0*dz;

amrex::Real const G_value = 1./(4*MathConst::pi*PhysConst::ep0) * (
Member Author

The ep0 factor should be removed here, so that we can use this function also in the magnetostatic solver (same as with the multigrid solver).
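
A minimal sketch of what pulling the prefactor out could look like (the helper name IGF_unitless is hypothetical; the expression uses the standard antiderivative of 1/r, not necessarily the exact form in this PR):

#include <AMReX_REAL.H>
#include <AMReX_GpuQualifiers.H>
#include <AMReX_Extension.H>
#include <cmath>

// Integrated-Green-function kernel without any physical prefactor, so that the
// electrostatic solver can multiply by 1/(4*pi*ep0) and the magnetostatic
// solver by mu0/(4*pi) at the call site.
AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE
amrex::Real IGF_unitless (amrex::Real x, amrex::Real y, amrex::Real z)
{
    amrex::Real const r = std::sqrt( x*x + y*y + z*z );
    return   x*y*std::log( z + r )
           + y*z*std::log( x + r )
           + x*z*std::log( y + r )
           - 0.5*x*x*std::atan( y*z/(x*r) )
           - 0.5*y*y*std::atan( x*z/(y*r) )
           - 0.5*z*z*std::atan( x*y/(z*r) );
}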

Comment on lines 63 to 70
+ 0.5 * x*z*std::log( (1 + y/r)/(1 - y/r) );
if (z >= 0) {
G += x*y*std::log( z + r );
} else {
// When z is negative and much larger than x and y (in absolute value), the naive evaluation of z + r suffers from catastrophic cancellation
// Thus, here we use z + r = (z+r)(r-z)/(r-z) = (x*x + y*y)/(r-z)
G += x*y*std::log( (x*x + y*y)/(r-z) );
}
Member Author

Once we have automated tests, this whole block of code could probably be replaced by:

Suggested change: replace

    + 0.5 * x*z*std::log( (1 + y/r)/(1 - y/r) );
    if (z >= 0) {
        G += x*y*std::log( z + r );
    } else {
        // When z is negative and much larger than x and y (in absolute value), the naive evaluation of z + r suffers from catastrophic cancellation
        // Thus, here we use z + r = (z+r)(r-z)/(r-z) = (x*x + y*y)/(r-z)
        G += x*y*std::log( (x*x + y*y)/(r-z) );
    }

with

    + 0.5 * x*z*std::log( (1 + y/r)/(1 - y/r) )
    + 0.5 * x*y * std::copysign(1.0, z) * std::log( (r+std::abs(z))*(r+std::abs(z))/(x*x + y*y) );

(from the symmetrized form of the Green function)
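
As a side note, the cancellation issue mentioned above is easy to reproduce in isolation; a small standalone check (not part of the PR):

#include <cmath>
#include <cstdio>

int main ()
{
    // |z| much larger than |x| and |y|, with z < 0: the naive z + r cancels catastrophically.
    double const x = 1e-3, y = 1e-3, z = -1e3;
    double const r = std::sqrt( x*x + y*y + z*z );

    double const naive  = std::log( z + r );                  // loses accuracy
    double const stable = std::log( (x*x + y*y)/(r - z) );    // uses z + r = (x*x + y*y)/(r - z)

    std::printf("naive  = %.17g\nstable = %.17g\n", naive, stable);
    return 0;
}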

@RemiLehe (Member Author)

Superseded by #4648

@RemiLehe closed this on Jan 30, 2024