New Implicit Solver interface with options to select Picard or Newton (JFNK) for the nonlinear solver. (BLAST-WarpX#4736)
* created Source/FieldSolver/WarpXImplicitFieldsEM.cpp file to store functions used with the implicit solvers. Only has ComputeRHSE() and ComputeRHSB() functions thus far.
* OneStep_ImplicitPicard routine now uses ComputeRHSE and ComputeRHSB functions in place of EvolveE and EvolveB functions.
* created Source/Evolve/WarpXImplicitOps.cpp file to store common functions used by implicit solvers. Moved several functions from WarpXOneStepImplicitPicard.cpp to here.
* added NonlinearSolver:: enum. Options are picard and newton. Set from input file using algo.nonlinear_solver = picard, for example. Default is picard. It is not used yet.
* changed EvolveScheme::ImplicitPicard and SemiImplicitPicard to ThetaImplicit and SemiImplicit, respectively. This affects parsing from input file. algo.evolve_scheme = implicit_picard ==> theta_implicit. Have not updated Docs yet.
* NonlinearSolver ==> NonlinearSolverType
* intermediate commit. Added an ImplicitSolversEM class. WarpX owns an instance of this class. OneStepImplicitEM is now done here. This class owns all things pertaining to the implicit integrator. Also added a NonlinearSolvers class, but it is not used yet.
* cleaning up ImplicitSolverEM::OneStep().
* more refactoring of ImplicitSolverEM class.
* WarpXFieldVec ==> WarpXSolverVec
* removed deprecated functions. WarpXSolverVec has zero ghost cells.
* ImplicitSolverEM::OneStep now looks exactly like Picard solver. Next step is to move it there.
* ImplicitSolverEM::OneStep() now uses the Picard solver object to solve the nonlinear equation.
* changed where implicit solver parameters are printed.
* refactoring of WarpXImplicitOps.cpp
* added NewtonSolver.H file. Doesn't work yet.
* adding more functionality to WarpXSolverVec class.
* added JacobianFunctionJFNK.H file in NonlinearSolvers/ folder. It contains all of the necessary functions required by the linear operator template parameter for AMReX_GMRES.
* dotMask object used for dot product from Linear Function class now lives in the implicit solver class. This ensures that one and only one instance of this object will be defined. dotProduct and norm can now be called through the implicit solver class.
* moved temporary linear_function and GMRES testing lines out of Picard::Define() and into Newton::Define()
* intermediate commit. JFNK almost ready.
* small refactoring of PreRHSOp() and PostUpdateState() functions.
* cleaning things up.
* Newton solver runs. GMRES runs. Next step is to do Particle-suppressed (PS) part.
* minor clean up.
* fixed typo in convergence message for Newton and Picard solvers.
* changed how PostUpdateState() is used in implicit solver. Now parsing Picard particle solver parameters in ImplicitSolverEM class. Using a new formula for the epsilon used in the finite-difference Jacobian action calculation that is suitable for large absolute norms of the solution vector.
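As a rough illustration of the matrix-free Jacobian action mentioned above, the sketch below shows a finite-difference Jacobian-vector product with a textbook epsilon choice that stays well-scaled when the solution norm is large. This is a generic C++ sketch, not the WarpX implementation; the residual functor type, the vector type, and the exact epsilon formula are illustrative assumptions.

```cpp
#include <cmath>
#include <functional>
#include <limits>
#include <vector>

using Vec = std::vector<double>;
// Residual functor: fills F with F(U). Stand-in for the implicit
// field/particle residual, not the actual WarpX interface.
using ResidualFn = std::function<void(Vec& F, const Vec& U)>;

static double norm2(const Vec& v) {
    double s = 0.0;
    for (double x : v) { s += x * x; }
    return std::sqrt(s);
}

// Matrix-free Jacobian action: Jv ~= (F(U + eps*v) - F(U)) / eps.
// eps is chosen so the perturbation remains meaningful even when |U| is large
// (a common Knoll/Keyes-style choice; the exact formula used in the solver may differ).
void JacobianTimesVector(Vec& Jv, const Vec& U, const Vec& FofU, const Vec& v,
                         const ResidualFn& F)
{
    const double normV = norm2(v);
    Jv.assign(U.size(), 0.0);
    if (normV == 0.0) { return; }

    const double eps =
        std::sqrt(std::numeric_limits<double>::epsilon()) * (1.0 + norm2(U)) / normV;

    Vec Upert(U.size());
    for (std::size_t i = 0; i < U.size(); ++i) { Upert[i] = U[i] + eps * v[i]; }

    Vec Fpert(U.size());
    F(Fpert, Upert);

    for (std::size_t i = 0; i < U.size(); ++i) { Jv[i] = (Fpert[i] - FofU[i]) / eps; }
}
```

In the JFNK outer loop, this Jacobian action is handed to GMRES to solve J δ = −F(U), after which the Newton update U ← U + δ is applied.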
* Picard method for particles is now being used. PS-JFNK works.
* moved WarpXImplicitOps.cpp from Source/Evolve/ to Source/FieldSolver/ImplicitSolvers/
* minor cleanup. PostUpdateState() ==> UpdateWarpXState().
* Moved the particle convergence check into its own function.
* added increment function to WarpXSolverVec class.
* removed some commented out lines in JacobianFunctionJFNK.H
* removed a_tol condition for print message when maximum iterations reached in nonlinear solvers. The Newton solver iteration counter is now zero-based.
* cleaned up Picard method for self-consistent particle update. Added an ablastr warning message for particles that don't converge after the maximum number of iterations.
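For reference, the self-consistent particle update referred to here is a fixed-point (Picard) iteration on the time-centered particle position and velocity. The sketch below is a deliberately simplified, non-relativistic, 1D, electric-field-only version showing the structure of the loop, the convergence check, and the warning on non-convergence; names such as `Efield_at` and the tolerance handling are illustrative, not the WarpX code.

```cpp
#include <cmath>
#include <functional>
#include <iostream>

// Illustrative 1D, non-relativistic Picard iteration for an implicit
// (time-centered) particle push: iterate on x^{n+1/2}, v^{n+1/2} until the
// update stops changing. Efield_at is a stand-in for the field gather.
void PicardParticlePush(double& x, double& v, double q_over_m, double dt,
                        const std::function<double(double)>& Efield_at,
                        double rel_tol, int max_iters)
{
    const double x_n = x;
    const double v_n = v;
    double x_half = x_n;
    double v_half = v_n;

    bool converged = false;
    for (int iter = 0; iter < max_iters; ++iter) {
        const double E = Efield_at(x_half);
        const double v_half_new = v_n + 0.5 * dt * q_over_m * E;
        const double x_half_new = x_n + 0.5 * dt * v_half_new;

        const double dv = std::abs(v_half_new - v_half);
        const double dx = std::abs(x_half_new - x_half);
        v_half = v_half_new;
        x_half = x_half_new;

        if (dv <= rel_tol * std::abs(v_half) && dx <= rel_tol * std::abs(x_half)) {
            converged = true;
            break;
        }
    }

    if (!converged) {
        // In WarpX this would be issued through the ablastr warning manager.
        std::cerr << "Picard particle iteration did not converge after "
                  << max_iters << " iterations\n";
    }

    // Complete the step from the converged time-centered values.
    v = 2.0 * v_half - v_n;
    x = 2.0 * x_half - x_n;
}
```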
* fixed small bug in PicardSolver related to absolute tolerance check for convergence. Default absolute tolerance values are set to zero.
* the mask used to compute the dot product of a WarpXSolverVec is now owned by the WarpXSolverVec class rather than the time solver class. It is a static member to avoid multiple definitions.
* defined accessors for Efield_fp and Bfield_fp Vectors owned by WarpX. ImplicitSolver is no longer a friend class of WarpX. Small tidy for PicardSolver.H.
* SemiImplicitEM and ThetaImplicitEM are now their own independent derived classes from the base ImplicitSolver class.
* added algorithm descriptions to the top of the SemiImplicitEM.cpp and ThetaImplicitEM.cpp files.
* updated appropriate files in Examples and Regression folders to reflect new changes.
* updating docs.
* JacobianFunctionJFNK.H ==> JacobianFunctionMF.H
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* need to call clear on the static vector m_dotMask owned by the WarpXSolverVec class to avoid "malloc_consolidate(): unaligned fastbin chunk detected" error messages at simulation finish.
* moved WarpXSolverVec class from ../ImplicitSolvers/Utils/ to ../ImplicitSolvers/. The Utils directory is deleted.
* ImplicitPushXP: GPU Support for Convergence Test
* cleaning up.
* Atomic: Fix type of `1`
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* bug fix.
* Update Source/FieldSolver/ImplicitSolvers/WarpXSolverVec.H
Co-authored-by: Revathi Jambunathan <41089244+RevathiJambunathan@users.noreply.github.com>
* Remove Zero
* adding Copyright lines to top of files. Removed commented code.
* Prevent calling the implicit solver if not initialized
* More robust if conditions
* set implicit verbose to warpx verbose
* Update Source/FieldSolver/ImplicitSolvers/ThetaImplicitEM.cpp
* Simplify call to updates of E
* changed benchmarks_json file names as needed.
* using warpx.verbose
* clang-tidying
* changed header names.
* clang-tidying
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* more clang-tidying
* clang tidy again
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* UpdateElectricField ==> SetElectricFieldAndApplyBCs
* clang tidy
* more clang tidy
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* prohibiting copy and move constructors for solver classes.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fixed bug in move constructor in WarpXSolverVec.
* slight refactoring of ThetaImplicitEM class.
* reducing divisions in Picard method for particles.
* small cosmetic changes to implicit solvers.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* removed commented out code.
* updating Docs and adding briefs.
* Fix HIP compilation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix const
* fixed indent. updated comments.
* New Python test: Particle-Boundary interaction (BLAST-WarpX#4729)
* enable the diagnostic of ParticleScraping in Python
* Update picmi.py
* Update picmi.py
* new test
* python update
* modification of the script
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update PICMI_inputs_rz.py
* json
* update
* Update PICMI_inputs_rz.py
* Update particle_boundary_interaction.json
* Update PICMI_inputs_rz.py
* Update PICMI_inputs_rz.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update PICMI_inputs_rz.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix hanging script in parallel
* Make the test executable
* Update analysis script
* Update particle_containers.py
* Update PICMI_inputs_rz.py
* Update analysis.py
* Update analysis.py
* Update particle_containers.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Adding normal components to regular boundary buffer (BLAST-WarpX#4742)
* first draft
* adding normal only
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update ParticleBoundaryBuffer.cpp
* Update ParticleBoundaryBuffer.cpp
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Add function to set domain boundary potentials from Python (BLAST-WarpX#4740)
* add function to set domain boundary potentials from Python
* switch function arguments to form `potential_[lo/hi]_[x/y/z]` and add to docs
* clean up `ablastr/fields` (BLAST-WarpX#4753)
* move PoissonInterpCPtoFP to Interpolate.H
* concatenate nested namespaces
* Split clang-tidy CI test into 4 to improve performance (BLAST-WarpX#4747)
* split clang-tidy checks to improve performance
* rename folders and tests
* fix concurrency
* Simplify
---------
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* Replace links to learn git (BLAST-WarpX#4758)
* Replace links to learn git
* Bugfix in `fields.py` for GPU run without `cupy` (BLAST-WarpX#4750)
* Bugfix in `fields.py` for GPU run without `cupy`
* apply suggestion from code review
* Release 24.03 (BLAST-WarpX#4759)
* AMReX: 24.03
* pyAMReX: 24.03
* WarpX: 24.03
* Implement stair-case Yee solver with EB in RZ geometry (BLAST-WarpX#2707)
* Allow compilation with RZ EB
* Do not push cells for RZ Yee solver, when covered with EB
* Fix compilation errors
* Fix additional compilation errors
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix additional compilation errors
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add automated test
* Add automated test
* Fix path in tests
* Enable parser in RZ
* Update example script
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Clean-up PR
* Initialize EB quantities
* Modified EM field initialization in 2D with EB
* Typo fix
* Typo fix
* Ignoring unused variables correctly
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Correct condition for updating E
* Correct update of B
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update B push
* Update input script
* Revert "Update input script"
This reverts commit 5087485.
* Update initialization
* Updated test
* Move test to a different folder
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add test for WarpX-test.ini
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix path for tests
* Update test description
* Update test metadata
* Add checksum file
* Revert changes
* Revert changes
* Change lx to lr
* Revert "Change lx to lr"
This reverts commit be3039a.
* Change lx to lr
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lgiacome <lorenzo.giacome@cern.ch>
* AMReX/pyAMReX/PICSAR: Weekly Update (BLAST-WarpX#4763)
* AMReX: Weekly Update
* pyAMReX: Weekly Update
* clean up (BLAST-WarpX#4761)
* Fix compilation
* updating some function names to contain Implicit.
* fixed bug that caused segfault on GMRES restart.
* parsing GMRES restart length from input file.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update field accessor
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* removed +, -, and * operators from WarpXSolverVec class. These operators encourage inefficient vector arithmetic.
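As a sketch of the design choice behind this removal (method names are hypothetical, not the exact WarpXSolverVec API): overloaded `+`, `-`, and `*` return full-size temporaries on every call, whereas axpy-style in-place updates reuse existing storage.

```cpp
#include <vector>

// Minimal sketch of an "in-place only" solver vector: no operator+, operator-,
// or operator* that would allocate temporaries; arithmetic is expressed as
// axpy-style updates on existing storage. Method names are illustrative.
class SolverVec {
public:
    explicit SolverVec(std::size_t n) : m_data(n, 0.0) {}

    void Copy(const SolverVec& src) { m_data = src.m_data; }

    // this += a * rhs  (the "increment" operation added earlier in this PR)
    void increment(const SolverVec& rhs, double a) {
        for (std::size_t i = 0; i < m_data.size(); ++i) { m_data[i] += a * rhs.m_data[i]; }
    }

    // this = a*this + b*rhs, still without temporaries
    void linComb(double a, double b, const SolverVec& rhs) {
        for (std::size_t i = 0; i < m_data.size(); ++i) {
            m_data[i] = a * m_data[i] + b * rhs.m_data[i];
        }
    }

private:
    std::vector<double> m_data;
};

// Usage: instead of r = b - Ax (two temporaries), write
//   r.Copy(b);
//   r.increment(Ax, -1.0);
```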
* fix/workaround to field accessor issue.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* changing implicit-scheme related header files in WarpX.H and WarpX.cpp
* WarpX::max_particle_iterations ==> WarpX::max_particle_its_in_implicit_scheme and WarpX::particle_tolerance ==> WarpX::particle_tol_in_implicit_scheme
* updating docs.
* updating docs again.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* adding comments about the template parameters Vec and Ops used in the NonlinearSolver class.
* adding comments. addressing comments from PR.
* Ensure that laser particles can converge in the implicit solver
* Add braces to make clang-tidy happy
* moving nonlinear solver parameters to base ImplicitSolver class
* mirrors not to be used with implicit schemes.
* moved more to base implicit solver class. adding comments.
* removed some WarpXSolverVec functions. Updated comments.
* clang tidy complains when removing default copy constructor in WarpXSolverVec.H
* amrex::ExecOnFinalize(WarpXSolverVec::clearDotMask)
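The pattern referenced here, clearing a static member before AMReX tears down its memory arenas, looks roughly like the following. This is a schematic, not the WarpX code; the real WarpXSolverVec stores per-level dot-product masks, reduced here to a placeholder container.

```cpp
#include <AMReX.H>
#include <vector>

// Schematic of the cleanup pattern: a static member owned by the solver-vector
// class is released via amrex::ExecOnFinalize so it is destroyed before
// amrex::Finalize(), avoiding allocator errors at program exit.
class SolverVecLike {
public:
    static void Define() {
        if (!m_registered) {
            amrex::ExecOnFinalize(SolverVecLike::clearDotMask);
            m_registered = true;
        }
        // ... build m_dotMask here ...
    }

    static void clearDotMask() { m_dotMask.clear(); }

private:
    // Placeholder for the per-level dot-product masks used in WarpX.
    static inline std::vector<int> m_dotMask;
    static inline bool m_registered = false;
};
```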
* WarpXSolverVec (const WarpXSolverVec&) = delete
* updating briefs for nonlinear solvers.
* adding loop over levels.
* static cast amrex::Vector.size() to int
* updating docs for nonlinear solvers.
* adding gmres.restart_length to docs.
* fixed typos in docs.
* Removed PreRHSOp() call from nonlinear solvers.
* clang tidy.
* Prohibit = operator for WarpXSolverVec. Using Copy() instead.
* Document PICMI function `LoadInitialField`
* updating comments in WarpXImplicitOps.cpp
* moved static member m_dotMask definition to the header file with inline added to the declaration.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fixed indent.
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Debojyoti Ghosh <debojyoti.ghosh@gmail.com>
Co-authored-by: Revathi Jambunathan <41089244+RevathiJambunathan@users.noreply.github.com>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Weiqun Zhang <weiqunzhang@lbl.gov>
Co-authored-by: Eya D <81635404+EyaDammak@users.noreply.github.com>
Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
Co-authored-by: Arianna Formenti <ariannaformenti@lbl.gov>
Co-authored-by: Luca Fedeli <luca.fedeli@cea.fr>
Co-authored-by: lgiacome <lorenzo.giacome@cern.ch>
Updated documentation for ``algo.evolve_scheme``:

* ``explicit``: Use an explicit solver, such as the standard FDTD or PSATD.

* ``theta_implicit_em``: Use a fully implicit electromagnetic solver with a time-biasing parameter theta bound between 0.5 and 1.0. Exact energy conservation is achieved using theta = 0.5. Maximal damping of high-k modes is obtained using theta = 1.0. Choices for the nonlinear solver include a Picard iteration scheme and particle-suppressed (PS) JFNK.
  The algorithm itself is numerically stable for large time steps. That is, it does not require time steps that resolve the plasma period or the CFL condition for light waves. However, the practicality of using a large time step depends on the nonlinear solver. Note that the Picard solver is for demonstration only. It is inefficient and will most likely not converge when :math:`\omega_{pe} \Delta t` is close to or greater than one, or when the CFL condition for light waves is violated. The PS-JFNK method must be used in order to use large time steps. However, the current implementation of PS-JFNK is still inefficient because the JFNK solver is not preconditioned and there is no use of the mass matrices to minimize the cost of a linear iteration. The time step is limited by how many cells a particle can cross in a time step (MPI-related) and by the need to resolve the relevant physics.
  The Picard method is described in `Angus et al., On numerical energy conservation for an implicit particle-in-cell method coupled with a binary Monte-Carlo algorithm for Coulomb collisions <https://doi.org/10.1016/j.jcp.2022.111030>`__.
  The PS-JFNK method is described in `Angus et al., An implicit particle code with exact energy and charge conservation for electromagnetic studies of dense plasmas <https://doi.org/10.1016/j.jcp.2023.112383>`__. (The version implemented in WarpX is an updated version that includes the relativistic gamma factor for the particles.) Also see `Chen et al., An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm <https://doi.org/10.1016/j.jcp.2011.05.031>`__.
  Exact energy conservation requires that the interpolation stencil used for the field gather match that used for the current deposition. ``algo.current_deposition = direct`` must be used with ``interpolation.galerkin_scheme = 0``, and ``algo.current_deposition = Esirkepov`` must be used with ``interpolation.galerkin_scheme = 1``. If using ``algo.current_deposition = villasenor``, the corresponding field gather routine will automatically be selected and the ``interpolation.galerkin_scheme`` flag does not need to be specified. The Esirkepov and villasenor deposition schemes are charge-conserving.

* ``semi_implicit_em``: Use an approximately energy conserving semi-implicit electromagnetic solver. Choices for the nonlinear solver include a Picard iteration scheme and particle-suppressed JFNK.
  Note that this method has the CFL limitation :math:`\Delta t < 1/\left( c \sqrt{\sum_i 1/\Delta x_i^2} \right)`. The Picard solver for this method can only be expected to work well when :math:`\omega_{pe} \Delta t` is less than one.
  The method is described in `Chen et al., A semi-implicit, energy- and charge-conserving particle-in-cell algorithm for the relativistic Vlasov-Maxwell equations <https://doi.org/10.1016/j.jcp.2020.109228>`__.
  Exact energy conservation requires that the interpolation stencil used for the field gather match that used for the current deposition. ``algo.current_deposition = direct`` must be used with ``interpolation.galerkin_scheme = 0``, and ``algo.current_deposition = Esirkepov`` must be used with ``interpolation.galerkin_scheme = 1``. If using ``algo.current_deposition = villasenor``, the corresponding field gather routine will automatically be selected and the ``interpolation.galerkin_scheme`` flag does not need to be specified. The Esirkepov and villasenor deposition schemes are charge-conserving.
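For orientation, the theta-implicit field advance described above can be summarized schematically (SI units, omitting the details of the particle coupling) as the following time-biased update with :math:`\theta \in [0.5, 1]`; this is a sketch of the general theta scheme, not a transcription of the WarpX source:

```latex
\begin{aligned}
\mathbf{E}^{n+1} &= \mathbf{E}^{n} + \Delta t \left( c^2 \nabla \times \mathbf{B}^{\,n+\theta}
                    - \frac{\mathbf{J}^{\,n+1/2}}{\varepsilon_0} \right), \\
\mathbf{B}^{n+1} &= \mathbf{B}^{n} - \Delta t \, \nabla \times \mathbf{E}^{\,n+\theta}, \\
\mathbf{X}^{\,n+\theta} &\equiv (1-\theta)\,\mathbf{X}^{n} + \theta\,\mathbf{X}^{n+1},
                 \qquad \mathbf{X} \in \{\mathbf{E}, \mathbf{B}\}.
\end{aligned}
```

With theta = 0.5 this reduces to the energy-conserving time-centered form, while theta = 1.0 maximally damps high-k modes, consistent with the description above. A minimal, hypothetical input-deck fragment selecting the new scheme is shown below; parameter values are placeholders, and only parameter names that appear in this PR and its docs are used.

```
algo.evolve_scheme = theta_implicit_em
algo.nonlinear_solver = newton          # or picard (the default)
gmres.restart_length = 30               # hypothetical value
algo.current_deposition = villasenor    # charge-conserving; matching field gather selected automatically
```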