Custom coef solver #2

Open · wants to merge 3,484 commits into base: master
Conversation

cdmccombs

No description provided.

EZoni and others added 30 commits October 11, 2024 14:26
Could we do this to make sure that we run the GitHub Actions and Azure
jobs (build, test) only if _at least one file outside the_ `Docs`
_directory_ is modified, i.e., skip those jobs if only files in the
`Docs` directory are modified?

I think it would be safe to do so (and a bit of a waste of resources to
not do so...), but I leave it open for discussion.

If merged, we could test this by rebasing BLAST-WarpX#5386 and seeing if the
correct CI jobs are skipped.

Note that this PR leaves the other CI jobs untouched, e.g., `source`,
`docs`, `CodeQL`, etc.
This environment variable was used for Perlmutter when `--cpus-per-task=N` did not work yet. It was copied around to other templates.

These days, `--cpus-per-task` should work, and the variable was renamed in SLURM to `SLURM_CPUS_PER_TASK`.
https://slurm.schedmd.com/sbatch.html#OPT_SLURM_CPUS_PER_TASK

Thanks to NERSC engineers for reporting this update!
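For templates that still want an explicit thread count, the modern variable can be read directly. A minimal sketch (the helper name is hypothetical, not from the WarpX templates):

```python
import os

def omp_threads_from_slurm(env=None):
    """Return the OpenMP thread count from SLURM's per-task CPU allocation.

    Modern SLURM exports SLURM_CPUS_PER_TASK when --cpus-per-task=N is set;
    fall back to 1 outside a batch allocation.
    """
    env = os.environ if env is None else env
    return int(env.get("SLURM_CPUS_PER_TASK", "1"))
```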
…arpX#5394)

The fix introduced in BLAST-WarpX#5308 was not correct for Azure pipelines.

In GitHub Actions we trigger a run on the `push` event only for the
`development` branch.

The Azure equivalent of that is triggering a run on the `trigger` event
only for the `development` branch. However, since the `trigger` event
was completely absent from the Azure pipeline file (that is, the default
setup was being used), I had erroneously added the branch filter to the
`pr` event instead, unlike what I did for GitHub Actions, where the
`push` event was exposed in the YAML files.

This was originally aimed at avoiding duplicate runs for "individual CI"
when `pre-commit` opens a pull request by pushing to a secondary branch
`pre-commit-ci-update-config` in the main repo (instead of a fork).

The new setup is tested in BLAST-WarpX#5393, where I copied these changes and where
one can see that a commit pushed to that PR does not trigger an
"individual CI" Azure pipeline anymore, but only a "PR automated" one.

Hopefully this is correct for the merge commits that get pushed to
`development` once a PR is closed, but we'll be able to test this only
after merging a PR.
<!--pre-commit.ci start-->
updates:
- [github.com/mgedmin/check-manifest: 0.49 →
0.50](mgedmin/check-manifest@0.49...0.50)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
As suggested by @WeiqunZhang:
We should move `CXXFLAGS: "-Werror -Wno-error=pass-failed"` to when
WarpX builds, since it is picked up by `pip`. It probably didn't fail
before because there was a cached version of the Python packages; now
there is probably a new version of something that requires rebuilding
some packages.
- Weekly update to latest AMReX:
```console
./Tools/Release/updateAMReX.py
```
- Weekly update to latest pyAMReX:
```console
./Tools/Release/updatepyAMReX.py
```
- Weekly update to latest PICSAR (no changes):
```console
./Tools/Release/updatePICSAR.py
```
<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.6.9 →
v0.7.0](astral-sh/ruff-pre-commit@v0.6.9...v0.7.0)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The new version of picmistandard is compatible with NumPy version 2.
This PR adds time-averaged field diagnostics to the WarpX output.

To-do:
- [x] code
- [x] docs
- [x] tests
- [x] example

Follow-up PRs:
- meta-data
- make compatible with adaptive time stepping

This PR is based on work performed during the *2024 WarpX Refactoring
Hackathon* and was created together with @RevathiJambunathan.

Successfully merging this pull request may close BLAST-WarpX#5165.

---------

Co-authored-by: RevathiJambunathan <revanathan@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <ezoni@lbl.gov>
This PR adds details in the beam-beam collision example about how to
generate the QED lookup tables.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The CI checks `Intel / oneAPI ICX SP` and `Intel / oneAPI DPC++ SP` have
been failing for a few days.

This is likely because the GitHub Actions runner now installs
IntelLLVM 2025.0.0 instead of IntelLLVM 2024.2.1, as it did until a few
days ago.

This causes the following issue when building openPMD:
```console
/home/runner/work/WarpX/WarpX/build_sp/_deps/fetchedopenpmd-src/include/openPMD/backend/Container.hpp:263:32: error: no member named 'm_container' in 'Container<T, T_key, T_container>'
  263 |         container().swap(other.m_container);
      |                          ~~~~~ ^
1 error generated.
```

We can try to install the previous version of IntelLLVM manually and see
if that fixes the issue.
…X#5421)

Our `CMakeLists` to set up the `ctest` executable had a logic error when
`WarpX_APP=OFF` and `WarpX_PYTHON=ON`, in that it was trying to install
executable tests without an executable application.

The error message looked something like
```console
  Error evaluating generator expression:
    $<TARGET_FILE:app_3d>
  No target "app_3d"
```
…arpX#5423)

This PR updates the instructions to compile WarpX on the Adastra
supercomputer (CINES, France)
<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.7.0 →
v0.7.1](astral-sh/ruff-pre-commit@v0.7.0...v0.7.1)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Weekly update to latest AMReX:
```console
./Tools/Release/updateAMReX.py
```
- Weekly update to latest pyAMReX:
```console
./Tools/Release/updatepyAMReX.py
```
- Weekly update to latest PICSAR (no changes):
```console
./Tools/Release/updatePICSAR.py
```
…#5395)

This adds the option to inject particles from the embedded boundary with
PICMI.
There was a bug where WarpX would read `flux_tmin` and `flux_tmax` only
for injection from a plane, but not for injection from the EB.

This PR fixes the bug, and uses `tmin`/`tmax` in the CI test for the EB
injection.
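Schematically, an input fragment exercising the fixed path might look like this (parameter values are illustrative, not taken from the actual CI test):

```
electrons.injection_style = NFluxPerCell
electrons.inject_from_embedded_boundary = 1
electrons.flux_tmin = 1.e-12   # now honored for EB injection as well
electrons.flux_tmax = 2.e-12
```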
…5389)

In the rigid injection, the fields were scaled by the fraction of time
spent between `n*dt` and `(n+1)*dt` to the right of the injection plane.
However, to be consistent with the leap-frog velocity update, this needs
to be between `(n-1/2)*dt` and `(n+1/2)*dt` instead.

As a side-effect of this PR, saving and re-setting `u` and
`optical_depth` to their original value is not needed anymore since the
scaling factor for E and B is 0 for particles to the left of the plane.
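The corrected centering can be sketched as follows; the function name and the crossing-time argument `t_cross` are hypothetical, for illustration only:

```python
def field_scale_fraction(t_cross, n, dt):
    """Fraction of the interval [(n-1/2)*dt, (n+1/2)*dt] that a particle
    spends to the right of the injection plane, given its crossing time
    t_cross. Clipped to [0, 1]; it is 0 for particles still to the left
    of the plane, which is why u and optical_depth need not be restored.
    """
    t_hi = (n + 0.5) * dt
    return min(max((t_hi - t_cross) / dt, 0.0), 1.0)
```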
Create issue templates for:
- [x] bugs
- [x] installation issues
- [x] feature requests
- [x] blank
- [ ] usage question -> link to
[Discussions](https://github.com/ECP-WarpX/WarpX/discussions)

---------

Co-authored-by: Edoardo Zoni <ezoni@lbl.gov>
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Warn users that use the old `warpx.multifab("internal_name")` overload to use the new one that only requests a prefix, with `dir` and `level` as extra arguments. Follow-up to BLAST-WarpX#5321.
…T-WarpX#5286)

Implemented a preconditioner for the implicit E-field solve using the
AMReX curl-curl operator and the MLMG solver.
+ Introduced a `Preconditioner` base class that defines the action of a
preconditioner for the JFNK algorithm.
+ Implemented the `CurlCurlMLMGPC` that uses the multigrid solution for
the curl-curl operator (implemented in `AMReX`) to precondition the
E-field JFNK solve.
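Schematically, the preconditioner replaces the full Jacobian with an MLMG solve of a linear curl-curl system; the exact coefficients below are an assumption for illustration, with $\theta$ the implicit time-centering parameter:

```latex
% Sketch of the preconditioned linear system (coefficients schematic):
\left[\,\mathbf{I} + (c\,\theta\,\Delta t)^{2}\,\nabla\times\nabla\times\,\right]
\delta\mathbf{E} = \mathbf{r}
```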

Other changes needed for this:
+ Partially implemented a mapping between WarpX field boundary types and
AMReX's linear operator boundary types.
+ Added some functionalities to `ImplicitSolver` class that allows
preconditioners to access `WarpX` info (like `Geometry`, boundaries,
etc).

Some preliminary wall times for:
```
Test: inputs_vandb_2d
  Grid: 160 X 160
  dt: 0.125/wpe = 2.22e-18 (dt_CFL = 7.84e-19 s, CFL = 2.83)
  Time iterations: 20

Solver parameters:
  newton.max_iterations = 10
  newton.relative_tolerance = 1.0e-12
  newton.absolute_tolerance = 0.0
  gmres.max_iterations = 1000
  gmres.relative_tolerance = 1.0e-8
  gmres.absolute_tolerance = 0.0

Avg GMRES iterations: ~3 (wPC), ~27 (noPC)
```

with `32^2` particles per cell:
```
Lassen (MPI + CUDA)
-------------------
  Box  GPU   Walltime (s)
             wPC       noPC
   1    1    2324.7    15004.1
   4    1    2306.8    14356.8
   4    4     758.9     3647.3

Dane (MPI + OMP)
----------------
  Box  CPU  Threads   Walltime (s)
                      wPC      noPC
   1    1      1      6709.3   43200.0*
   1    1      2      3279.1   22296.1
   1    1      4      1696.3   11613.2
   1    1      8      1085.0    6911.4
   1    1     16       724.3    4729.0
   4    1      1      5525.9   33288.8
  16    1      1      4419.4   28467.8
   4    4      1      1324.4    9121.1
  16   16      1       524.9    3658.8

* 43200.0 seconds is 12 hours (max job duration on Dane);
the simulation was almost done (started the 20th step).
```

with `10^2` particles per cell:
```
Lassen (MPI + CUDA)
-------------------
  Box  GPU   Walltime (s)
             wPC       noPC
   1    1    365.0     1443.5 
   4    1    254.1      927.8 
   4    4    133.1      301.5 

Dane (MPI + OMP)
----------------
  Box  CPU  Threads   Walltime (s)
                      wPC      noPC
   1    1      1      440.8    2360.5     
   1    1      2      241.7    1175.8 
   1    1      4      129.3     727.0 
   1    1      8       94.2     407.5 
   1    1     16       74.3     245.6 
   4    1      1      393.3    1932.5 
  16    1      1      337.6    1618.7 
   4    4      1       92.2     479.1 
  16   16      1       58.1     192.6 
```

---------

Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Justin Angus <angus1@llnl.gov>
Co-authored-by: Weiqun Zhang <WeiqunZhang@lbl.gov>
I think this should work to find and print backtraces after all CI tests
have run.

This is for Azure only. Local backtraces still need to be inspected
manually.

After trying many solutions, the current strategy is:
- avoid removing the backtrace files in the `cleanup` step of each test
(we continue to remove all other files);
- have a separate workflow step to find and print the backtrace files in
the Azure job (this is executed always).

The new Azure workflow step is labeled "Logs" and it comes right after
the step labeled "Test".
)

The `warpx` prefix was left off of this argument.

This addresses issues raised in BLAST-WarpX#5431 and BLAST-WarpX#5432
ax3l and others added 30 commits February 26, 2025 06:22
Use the latest ADIOS2 release (v2.10.2) on all HPC machines.
…pX#5710)

While working on another PR I uncovered what I think to be a bug where
`m_get_externalEB` is called two times. This PR fixes this bug

---------

Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
BLAST-WarpX#5711)

This adds a function that extracts the particles that were scraped at
the current timestep.

This is useful in callback functions, where we often want to re-inject
particles that hit the boundary, and therefore need to select the ones
that were scraped at the current timestep.

This also avoids calling `clear_buffer`, which potentially interferes
with the `BoundaryScrapingDiagnostic`.
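The selection amounts to filtering the scraping buffer by the step at which each particle was absorbed. This sketch uses a plain list of `(step, particle)` pairs as a hypothetical stand-in for the actual buffer, whose real API differs:

```python
def scraped_at_step(buffer, current_step):
    """Return the particles scraped exactly at current_step, leaving the
    buffer untouched (no clear_buffer call), so that the
    BoundaryScrapingDiagnostic still sees the full history."""
    return [particle for step, particle in buffer if step == current_step]
```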

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Pure functions like `linear_interp`, `bilinear_interp`, and
`trilinear_interp` are very general, so we can consider moving them to
`ablastr::math`. Moreover, I will need some of these functions to move
`picsar_qed` inside `ablastr`
(BLAST-WarpX#5677)
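As a reference for what these helpers compute, here is a bilinear interpolation sketch in Python; the real functions are C++ and their signatures differ:

```python
def bilinear_interp(x0, x1, y0, y1, f00, f01, f10, f11, x, y):
    """Bilinear interpolation of f on the rectangle [x0,x1] x [y0,y1],
    where fij is the value of f at (x_i, y_j)."""
    tx = (x - x0) / (x1 - x0)  # normalized coordinate in x, in [0, 1]
    ty = (y - y0) / (y1 - y0)  # normalized coordinate in y, in [0, 1]
    return ((1 - tx) * (1 - ty) * f00 + (1 - tx) * ty * f01
            + tx * (1 - ty) * f10 + tx * ty * f11)
```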
<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.9.6 →
v0.9.7](astral-sh/ruff-pre-commit@v0.9.6...v0.9.7)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- [x] Update HIP CI workflow to run on Ubuntu 24.04
- [x] Add HIP version to CLI, default 6.3.2
- [x] Fix bug caused by using `std::optional` on device

---------

Co-authored-by: Edoardo Zoni <ezoni@lbl.gov>
Co-authored-by: David Grote <grote1@llnl.gov>
To enable it, set the openPMD options in the input file, e.g.:
```
diag1.openpmd_backend = bp5
diag1.adios2_engine.parameters.FlattenSteps = on
```

This feature is useful for the BTD use case: data can be flushed after
each buffered write of a snapshot.

To check whether this feature is in use, try `bpls -V your_bp_file_name`.

Also adds a fix as in openPMD/openPMD-api#1655
for BP5 with file-based encoding, i.e., when some ranks have no
particles.

---------

Co-authored-by: Junmin Gu <junmin@login05.frontier.olcf.ornl.gov>
Co-authored-by: Luca Fedeli <luca.fedeli.88@gmail.com>
Co-authored-by: Junmin Gu <junmin@login04.frontier.olcf.ornl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.9.7 →
v0.9.9](astral-sh/ruff-pre-commit@v0.9.7...v0.9.9)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This PR introduces a prototype for adding the `clang-format` hook to
`pre-commit`.

To disable formatting, e.g., on blocks of mathematical formulas, use:
```C++
// clang-format off
amrex::Real x = my     * custom
              + pretty * alignment;
// clang-format on
```

Currently, the hook is applied only to the `Source/main.cpp` file to
demonstrate its functionality.

If this approach is deemed useful, we can gradually extend it to all C++
files in our codebase, one PR at a time. _We could make this into a
GitHub "project" to easily keep track of the progress._ If not, please
feel free to close the PR without merging.

The `.clang-format` configuration file has been generated based on the
`LLVM` style using the command
```bash
clang-format -style=llvm -dump-config > .clang-format
```
and has been modified in the following ways:

-
    ```
    AlwaysBreakAfterDefinitionReturnType: All  # instead of None
    ```
- 
    ```
    IndentWidth: 4  # instead of 2
    ```
-
    ```
    PointerAlignment: Left  # instead of Right
    ```
-
    ```
    SpaceBeforeParens: Custom  # instead of ControlStatements
    SpaceBeforeParensOptions:
      ...
      AfterFunctionDefinitionName: true  # instead of false
      AfterFunctionDeclarationName: true  # instead of false
      ...
    ```

A different base style could be chosen and/or further customization
could be done in future PRs as needed, when the formatting is applied to
more code.
…LAST-WarpX#5700)

The Villasenor and Buneman current deposition was moved to a separate
kernel routine, and implicit and explicit callers were added.

The test case is a 2D uniform plasma, run for a number of plasma periods.
The images below show the relative change in the energy for the parts
of the system. The total energy is conserved to roughly 1.e-5. The
results are shown for both the Villasenor and the Esirkepov current
deposition. The results are essentially the same, with the differences
in the two cases below the resolution of the images.

![energy_explicit_VandB](https://github.com/user-attachments/assets/104ae3e8-608f-48ac-952b-bd3029e3c0e6)

![energy_explicit_Esirkepov](https://github.com/user-attachments/assets/793fd495-feff-42dd-85df-b6b7acc934ea)
This PR allows the synchronization in time of the particle velocities
and positions when generating diagnostics. Without this option, the
particle velocities will lag behind the positions by a half time step.
This adds the boolean input parameter
`warpx.synchronize_velocity_for_diagnostics` to turn on this option,
defaulting to false.

There are several pieces to this PR:
- Changes to `MultiDiagnostic` and `MultiReducedDiags` adding routines
to check if any diagnostics will be done
- Adds a call to `PushP` to just before the diagnostics are done (to get
the updated fields from the electrostatic calculation)
- Add the appropriate documentation

In `Evolve`, if the synchronization is to be done, the velocity is
advanced a half step just before the diagnostics and
`is_synchronized=true` is set. Then, at the start of the next step, if
`is_synchronized` is true, the velocities are pushed back a half step to
be ready for the full leap-frog advance.

Comments:
- Is the documentation in the correct place in parameters.rst?
- The reduced diagnostics could perhaps use the new DoDiags method
instead of accessing `m_intervals` in its ComputeDiags.
- This PR leaves the original PushP unchanged, even though it is
possibly buggy. That PushP fetches the fields, but uses the particle
positions before the particle boundary conditions have been applied,
leading to a possible out-of-bounds reference. Also, that PushP may not
be consistent with the backwards PushP since the fields may have changed.
Comments are added to the code to note this potential problem. I avoided
changing this since it breaks many CI tests.
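The half-step bookkeeping described above can be sketched as follows (hypothetical names; not the actual `Evolve` code):

```python
def step(state, dt, do_diags, synchronize):
    """One step of leap-frog bookkeeping for synchronized diagnostics.
    state holds the velocity v, a (frozen) acceleration a, and a synced
    flag standing in for is_synchronized."""
    if state["synced"]:
        # Undo the diagnostic half push so v is again staggered by dt/2.
        state["v"] -= 0.5 * state["a"] * dt
        state["synced"] = False
    # ... the full leap-frog position/velocity update would happen here ...
    if do_diags and synchronize:
        # Advance v a half step so it sits at the same time as the positions.
        state["v"] += 0.5 * state["a"] * dt
        state["synced"] = True
    return state
```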

---------

Co-authored-by: Edoardo Zoni <ezoni@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
This PR extends the work already done by @RemiLehe and @oshapoval in
BLAST-WarpX#5524.

Some changes were made to how the ionization process is initialized, for
example, the user must now specify the `target_species` (i.e., the
species which undergoes ionization) as well as the `product_species`.
The product species can include the colliding species (for example
electron + neutral -> 2 x electron + ion), but does not have to (for
example H$^+$ + D -> H$^+$ + D$^+$ + electron).

The test created by @archermarx is now passing (at least early on):

![image](https://github.com/user-attachments/assets/344476b8-bc63-4395-92c1-d795183048b9)

Todo:

- [x] test implementation
- [x] fix scattering process to conserve momentum
- [x] clean up code

---------

Signed-off-by: roelof-groenewald <regroenewald@gmail.com>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Olga Shapoval <oshapoval@lbl.gov>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
…puting Foundation" --> "High Performance Software Foundation" ) (BLAST-WarpX#5723)

This PR corrects a typo in the Governance section of our readme file!
Add `Gpu::streamSynchronize();` after copying `TableData` in both
directions (host-to-device and device-to-host) in the
`ParticleHistogram2D` and `DifferentialLuminosity2D` reduced diagnostics.

We have observed unstable behaviors with the `DifferentialLuminosity2D`
diagnostic and @RemiLehe suggested this could be a reason.

Marked as bug, unless otherwise recommended.

---------

Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Add support to silence the status print or to skip the creation of
`write_used_inputs_file` altogether. This is useful in unsupervised
runs, like many optimizations, where we often want to avoid file
creation, e.g., when run from Python.
…ndation" --> "High Performance Software Foundation" ) (BLAST-WarpX#5724)

This PR corrects the typo pointed out in BLAST-WarpX#5723 in the documentation
Disallow BP5 with group-based encoding, because it creates files that
cannot be read back efficiently.

What works: BP4 f, BP4 g, BP5 f, BP5 v (still experimental, not yet
fully supported in readers), H5 f, H5 g.

References:
- BLAST-ImpactX/impactx#870
- openPMD/openPMD-api#1724
-
openPMD/openPMD-api#1457 (comment)

cc @franzpoeschel
The pip instructions for setuptools now require specifying the `[core]`
extra; otherwise dependent packages are taken from the system and are
usually outdated, which causes errors.

X-ref:
- https://setuptools.pypa.io/en/latest/userguide/quickstart.html
- pypa/setuptools#4483 (comment)
…LAST-WarpX#5684)

`PSATDCurrentCorrection` and `PSATDVayDeposition` are member functions
of the WarpX class, but they are used only in `WarpXPushFieldsEM.cpp`
and they can be easily turned into pure functions. Therefore, this PR
moves them inside an anonymous namespace in `WarpXPushFieldsEM.cpp` .
The goal is the simplification of the WarpX class.
Prepare the March release of WarpX, following the
[documentation](https://warpx.readthedocs.io/en/latest/maintenance/release.html):
1. Update to latest AMReX release:
```console
./Tools/Release/updateAMReX.py
```
2. Update to latest pyAMReX release:
```console
./Tools/Release/updatepyAMReX.py
```
3. Update to latest PICSAR release (no changes, still 25.01):
```console
./Tools/Release/updatePICSAR.py
``` 
4. Update WarpX version number:
```console
./Tools/Release/newVersion.sh
```
…WarpXComm.cpp (BLAST-WarpX#5672)

`UpdateCurrentNodalToStag` is a member function of the WarpX class, but
it is used only inside `WarpXComm.cpp` and it is defined there.
Therefore, this PR turns it into a pure function and moves it inside an
anonymous namespace in `WarpXComm.cpp` . This simplifies the interface
of the WarpX class.
…e in WarpX.cpp (BLAST-WarpX#5666)

`AllocateCenteringCoefficients` is a pure function used only inside
`WarpX.cpp`. This PR moves it from a private member function of the
WarpX class to an anonymous namespace inside `WarpX.cpp`. The PR also
makes the function more compact.
This is done to simplify the `WarpX.H` header.