
Commit 66fe115

guj (Junmin Gu), lucafedeli88 (Luca Fedeli), and ax3l (Axel Huebl) authored
Added support to use adios2's flatten_step (#5634)
To enable it, set it in the openPMD options through the input file, e.g.:

```
diag1.openpmd_backend = bp5
diag1.adios2_engine.parameters.FlattenSteps = on
```

This feature is useful for the BTD use case: data can be flushed after each buffered write of a snapshot. To check whether this feature is in use, try `bpls -V your_bp_file_name`.

Also adds a fix as in openPMD/openPMD-api#1655 for BP5 with file-based encoding, i.e., when some ranks have no particles.

---------

Co-authored-by: Junmin Gu <junmin@login05.frontier.olcf.ornl.gov>
Co-authored-by: Luca Fedeli <luca.fedeli.88@gmail.com>
Co-authored-by: Junmin Gu <junmin@login04.frontier.olcf.ornl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
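As an illustrative aside, a fuller diagnostics block for BTD might look like this in a WarpX input file (the `BackTransformed` diag type line and the diagnostic name are assumptions for illustration; the last two lines are the options this commit adds):

```
diag1.diag_type = BackTransformed
diag1.format = openpmd
diag1.openpmd_backend = bp5
diag1.adios2_engine.parameters.FlattenSteps = on
```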
1 parent ba2c3c7 commit 66fe115

File tree

3 files changed
+64 -14 lines changed

Docs/source/usage/parameters.rst

+16-9
@@ -2796,7 +2796,7 @@ In-situ capabilities can be used by turning on Sensei or Ascent (provided they a
       Only read if ``<diag_name>.format = sensei``.
       When 1 lower left corner of the mesh is pinned to 0.,0.,0.
 
-* ``<diag_name>.openpmd_backend`` (``bp``, ``h5`` or ``json``) optional, only used if ``<diag_name>.format = openpmd``
+* ``<diag_name>.openpmd_backend`` (``bp5``, ``bp4``, ``h5`` or ``json``) optional, only used if ``<diag_name>.format = openpmd``
       `I/O backend <https://openpmd-api.readthedocs.io/en/latest/backends/overview.html>`_ for `openPMD <https://www.openPMD.org>`_ data dumps.
       ``bp`` is the `ADIOS I/O library <https://csmd.ornl.gov/adios>`_, ``h5`` is the `HDF5 format <https://www.hdfgroup.org/solutions/hdf5/>`_, and ``json`` is a `simple text format <https://en.wikipedia.org/wiki/JSON>`_.
       ``json`` only works with serial/single-rank jobs.
@@ -2818,19 +2818,26 @@ In-situ capabilities can be used by turning on Sensei or Ascent (provided they a
 
       .. code-block:: text
 
-          <diag_name>.adios2_operator.type = blosc
-          <diag_name>.adios2_operator.parameters.compressor = zstd
-          <diag_name>.adios2_operator.parameters.clevel = 1
-          <diag_name>.adios2_operator.parameters.doshuffle = BLOSC_BITSHUFFLE
-          <diag_name>.adios2_operator.parameters.threshold = 2048
-          <diag_name>.adios2_operator.parameters.nthreads = 6 # per MPI rank (and thus per GPU)
+         <diag_name>.adios2_operator.type = blosc
+         <diag_name>.adios2_operator.parameters.compressor = zstd
+         <diag_name>.adios2_operator.parameters.clevel = 1
+         <diag_name>.adios2_operator.parameters.doshuffle = BLOSC_BITSHUFFLE
+         <diag_name>.adios2_operator.parameters.threshold = 2048
+         <diag_name>.adios2_operator.parameters.nthreads = 6 # per MPI rank (and thus per GPU)
 
       or for the lossy ZFP compressor using very strong compression per scalar:
 
       .. code-block:: text
 
-          <diag_name>.adios2_operator.type = zfp
-          <diag_name>.adios2_operator.parameters.precision = 3
+         <diag_name>.adios2_operator.type = zfp
+         <diag_name>.adios2_operator.parameters.precision = 3
+
+      For back-transformed diagnostics with ADIOS BP5, we are experimenting with a new option for variable-based encoding that "flattens" the output steps, aiming to increase write and read performance:
+
+      .. code-block:: text
+
+         <diag_name>.openpmd_backend = bp5
+         <diag_name>.adios2_engine.parameters.FlattenSteps = on
 
 * ``<diag_name>.adios2_engine.type`` (``bp4``, ``sst``, ``ssc``, ``dataman``) optional,
       `ADIOS2 Engine type <https://openpmd-api.readthedocs.io/en/0.16.1/details/backendconfig.html#adios2>`__ for `openPMD <https://www.openPMD.org>`_ data dumps.
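As a side note for readers who want to try the new engine parameter outside WarpX, it can be forwarded through openPMD-api's JSON options. A minimal sketch, assuming openPMD-api >= 0.15 built with ADIOS2 BP5 (file name and surrounding structure are illustrative):

```cpp
#include <openPMD/openPMD.hpp>

int main ()
{
    // Forward the ADIOS2 engine parameter FlattenSteps via JSON options;
    // this is what <diag_name>.adios2_engine.parameters.FlattenSteps = on
    // amounts to on the WarpX side.
    openPMD::Series series(
        "diags/openpmd_%T.bp", openPMD::Access::CREATE,
        R"({"adios2": {"engine": {"type": "bp5", "parameters": {"FlattenSteps": "on"}}}})");

    // ... declare iterations, meshes and particle records, store chunks ...

    series.close(); // finalize the file
    return 0;
}
```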

Source/Diagnostics/WarpXOpenPMD.H

+13
@@ -176,6 +176,19 @@ private:
         }
     }
 
+    /** Flush out data of the current openPMD iteration
+     *
+     * @param[in] isBTD whether the current diagnostic is BTD
+     *
+     * If isBTD=false, apply the default flush behaviour.
+     * If isBTD=true, advise ADIOS to use Put() instead of PDW (PerformDataWrite) for better performance.
+     *
+     * iteration.seriesFlush() is used instead of series.flush()
+     * because the latter flushes only if data is dirty,
+     * which causes trouble when the underlying writing function is collective (like PDW).
+     */
+    void flushCurrent (bool isBTD) const;
 
     /** This function does initial setup for the fields when the iteration is newly created
      * @param[in] meshes The meshes in a series
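The two flush targets that flushCurrent toggles between can also be exercised directly with openPMD-api. Below is a minimal sketch of the buffer/new_step pattern (assuming openPMD-api >= 0.15 with ADIOS2 BP5; the function names are illustrative, not WarpX code):

```cpp
#include <cstdint>
#include <openPMD/openPMD.hpp>

// Stage one BTD batch: flush registered data into ADIOS2's buffer
// instead of writing an output step right away.
void stage_batch (openPMD::Series& series, uint64_t step)
{
    openPMD::Iteration it = series.iterations[step];
    // ... store chunks for the fields/particles of this batch ...
    it.seriesFlush(R"(adios2.engine.preferred_flush_target = "buffer")");
}

// Finish the snapshot: push all buffered batches out as one new step.
void finish_snapshot (openPMD::Series& series, uint64_t step)
{
    openPMD::Iteration it = series.iterations[step];
    it.seriesFlush(R"(adios2.engine.preferred_flush_target = "new_step")");
}
```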

Source/Diagnostics/WarpXOpenPMD.cpp

+35-5
@@ -401,6 +401,24 @@ WarpXOpenPMDPlot::~WarpXOpenPMDPlot ()
     }
 }
 
+void WarpXOpenPMDPlot::flushCurrent (bool isBTD) const
+{
+    WARPX_PROFILE("WarpXOpenPMDPlot::flushCurrent");
+
+    auto hasOption = m_OpenPMDoptions.find("FlattenSteps");
+    const bool flattenSteps = isBTD && (m_Series->backend() == "ADIOS2") && (hasOption != std::string::npos);
+
+    openPMD::Iteration currIteration = GetIteration(m_CurrentStep, isBTD);
+    if (flattenSteps) {
+        // delayed until all fields and particles are registered for flush,
+        // then dumped once via FlattenSteps
+        currIteration.seriesFlush( "adios2.engine.preferred_flush_target = \"buffer\"" );
+    }
+    else {
+        currIteration.seriesFlush();
+    }
+}
+
 std::string
 WarpXOpenPMDPlot::GetFileName (std::string& filepath)
 {
@@ -531,7 +549,6 @@ WarpXOpenPMDPlot::WriteOpenPMDParticles (const amrex::Vector<ParticleDiag>& part
     WARPX_PROFILE("WarpXOpenPMDPlot::WriteOpenPMDParticles()");
 
 for (const auto & particle_diag : particle_diags) {
-
     WarpXParticleContainer* pc = particle_diag.getParticleContainer();
     PinnedMemoryParticleContainer* pinned_pc = particle_diag.getPinnedParticleContainer();
     if (isBTD || use_pinned_pc) {
@@ -649,6 +666,17 @@ for (const auto & particle_diag : particle_diags) {
                       pc->getCharge(), pc->getMass(),
                       isBTD, isLastBTDFlush);
     }
+
+    auto hasOption = m_OpenPMDoptions.find("FlattenSteps");
+    const bool flattenSteps = isBTD && (m_Series->backend() == "ADIOS2") && (hasOption != std::string::npos);
+
+    if (flattenSteps)
+    {
+        // force a new step so data from each BTD batch buffered with
+        // preferred_flush_target="buffer" can be flushed out
+        openPMD::Iteration currIteration = GetIteration(m_CurrentStep, isBTD);
+        currIteration.seriesFlush(R"(adios2.engine.preferred_flush_target = "new_step")");
+    }
 }
 
 void
@@ -665,6 +693,7 @@ WarpXOpenPMDPlot::DumpToFile (ParticleContainer* pc,
     const bool isLastBTDFlush
 )
 {
+    WARPX_PROFILE("WarpXOpenPMDPlot::DumpToFile()");
     WARPX_ALWAYS_ASSERT_WITH_MESSAGE(m_Series != nullptr, "openPMD: series must be initialized");
 
     AMREX_ALWAYS_ASSERT(write_real_comp.size() == pc->NumRealComps());
@@ -723,8 +752,7 @@ WarpXOpenPMDPlot::DumpToFile (ParticleContainer* pc,
         SetConstParticleRecordsEDPIC(currSpecies, positionComponents, NewParticleVectorSize, charge, mass);
     }
 
-    // open files from all processors, in case some will not contribute below
-    m_Series->flush();
+    flushCurrent(isBTD);
 
     // dump individual particles
     bool contributed_particles = false; // did the local MPI rank contribute particles?
@@ -765,6 +793,7 @@ WarpXOpenPMDPlot::DumpToFile (ParticleContainer* pc,
     // BP4 (ADIOS 2.8): last MPI rank's `Put` meta-data wins
     // BP5 (ADIOS 2.8): everyone has to write an empty block
     if (is_resizing_flush && !contributed_particles && isBTD && m_Series->backend() == "ADIOS2") {
+        WARPX_PROFILE("WarpXOpenPMDPlot::ResizeInADIOS()");
         for( auto & [record_name, record] : currSpecies ) {
             for( auto & [comp_name, comp] : record ) {
                 if (comp.constant()) { continue; }
@@ -804,7 +833,7 @@ WarpXOpenPMDPlot::DumpToFile (ParticleContainer* pc,
         }
     }
 
-    m_Series->flush();
+    flushCurrent(isBTD);
 }
 
 void
@@ -1476,7 +1505,7 @@ WarpXOpenPMDPlot::WriteOpenPMDFieldsAll ( //const std::string& filename,
         amrex::Gpu::streamSynchronize();
 #endif
         // Flush data to disk after looping over all components
-        m_Series->flush();
+        flushCurrent(isBTD);
     } // levels loop (i)
 }
 #endif // WARPX_USE_OPENPMD
@@ -1490,6 +1519,7 @@ WarpXParticleCounter::WarpXParticleCounter (ParticleContainer* pc):
     m_MPIRank{amrex::ParallelDescriptor::MyProc()},
     m_MPISize{amrex::ParallelDescriptor::NProcs()}
 {
+    WARPX_PROFILE("WarpXOpenPMDPlot::ParticleCounter()");
     m_ParticleCounterByLevel.resize(pc->finestLevel()+1);
     m_ParticleOffsetAtRank.resize(pc->finestLevel()+1);
     m_ParticleSizeAtRank.resize(pc->finestLevel()+1);
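One small observation on the diff: the FlattenSteps detection (substring search plus backend and BTD checks) now appears both in flushCurrent and in WriteOpenPMDParticles. A hypothetical follow-up, not part of this commit, could factor it into a helper:

```cpp
// Hypothetical helper, not in this commit: mirrors the duplicated check
// in flushCurrent() and WriteOpenPMDParticles().
bool WarpXOpenPMDPlot::flattenStepsEnabled (bool isBTD) const
{
    return isBTD
        && m_Series->backend() == "ADIOS2"
        && m_OpenPMDoptions.find("FlattenSteps") != std::string::npos;
}
```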
