.. _visualization-catalyst:

In situ Visualization with Catalyst 2
=====================================
Catalyst 2 (hereafter referred to simply as Catalyst) is a lightweight in situ visualization and analysis framework API developed for simulations and other scientific data producers.
It has a lightweight implementation (or **stub**) and an SDK for developing custom implementations of Catalyst.
ParaView comes with its own implementation (known as **ParaView Catalyst**) that leverages ParaView's visualization and analysis capabilities, which is what this document focuses on.


Enabling Catalyst
-----------------
In order to use Catalyst with WarpX, you must `build Catalyst 2 <https://catalyst-in-situ.readthedocs.io/en/latest/build_and_install.html>`_ and `build <https://github.com/Kitware/ParaView/blob/master/Documentation/dev/build.md>`__ or `install <https://www.paraview.org/download/>`__ ParaView 5.9+.
Afterward, AMReX must be built with ``AMReX_CONDUIT=TRUE``, ``AMReX_CATALYST=TRUE``, ``Conduit_DIR=/path/to/conduit``, and ``Catalyst_DIR=/path/to/catalyst`` (``/path/to/catalyst`` should be the directory containing ``catalyst-config.cmake``, not the path to the implementation).

Once AMReX is appropriately built, WarpX can be built with the following options:

.. code-block:: cmake

   WarpX_amrex_internal=FALSE
   AMReX_DIR="/path/to/amrex/build"

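Putting this together, a configure command for WarpX might look like the following sketch (the build directory layout and paths are placeholders for your own installation, not prescribed values):

.. code-block:: console

   $ cmake -S . -B build \
       -DWarpX_amrex_internal=FALSE \
       -DAMReX_DIR="/path/to/amrex/build"
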
If they cannot be found, ``Conduit_DIR`` and ``Catalyst_DIR`` will have to be set again. Ensure that AMReX is built with all required options, some common ones being:

.. code-block:: cmake

   AMReX_MPI=TRUE
   AMReX_MPI_THREAD_MULTIPLE=TRUE
   AMReX_LINEAR_SOLVERS=TRUE
   AMReX_PARTICLES=TRUE
   AMReX_PARTICLES_PRECISION=DOUBLE
   AMReX_PIC=TRUE
   AMReX_TINY_PROFILE=TRUE


Inputs File Configuration
-------------------------
Once WarpX has been compiled with Catalyst support, Catalyst needs to be enabled and configured at runtime.
This is done using the usual inputs file (read with ``amrex::ParmParse``).
The supported parameters are part of the :ref:`FullDiagnostics <running-cpp-parameters-diagnostics>`, with the ``<diag_name>.format`` parameter set to ``catalyst``.

In addition to configuring the diagnostics, the following parameters must be included:
 * ``catalyst.script_paths``: The locations of the pipeline scripts, separated by either a colon or a semicolon (e.g. ``/path/to/script1.py;/path/to/script2.py``).
 * ``catalyst.implementation`` (default ``paraview``): The name of the implementation being used (case sensitive).
 * ``catalyst.implementation_search_paths``: The locations to search for the given implementation. The specific file searched for is ``catalyst_{implementation}.so``.

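Conceptually, the lookup driven by these last two parameters resembles the following sketch (a hypothetical helper for illustration only, not Catalyst's actual code; only the ``catalyst_{implementation}.so`` naming comes from the description above):

```python
import os

def find_implementation(search_paths, implementation="paraview"):
    """Return the first catalyst_{implementation}.so found under the
    given directories, or None if no directory contains it.

    search_paths: an iterable of directories to search, in order.
    """
    target = f"catalyst_{implementation}.so"  # file name Catalyst searches for
    for path in search_paths:
        candidate = os.path.join(path, target)
        if os.path.isfile(candidate):
            return candidate
    return None
```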
Because the scripts and implementations are global, Catalyst neither benefits from nor differentiates between multiple diagnostics.

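For example, a minimal inputs-file fragment enabling Catalyst output might look like this (a sketch: ``diag1``, the interval, and all paths are placeholder assumptions, not values from this document):

.. code-block:: ini

   diagnostics.diags_names = diag1
   diag1.diag_type = Full
   diag1.intervals = 10
   diag1.format = catalyst

   catalyst.script_paths = /path/to/simple_catalyst_pipeline.py
   catalyst.implementation = paraview
   catalyst.implementation_search_paths = /path/to/paraview/lib/catalyst
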

Visualization/Analysis Pipeline Configuration
---------------------------------------------
Catalyst uses the files specified in ``catalyst.script_paths`` to run all analysis.

The following script, :code:`simple_catalyst_pipeline.py`, automatically detects the type of data for both the mesh and the particles, then creates an extractor for each. In most cases, these will be saved as ``.VTPC`` files, which can be read with the ``XML Partitioned Dataset Collection Reader``.

.. code-block:: python

   from paraview.simple import *
   from paraview import catalyst

   # Helper function
   def create_extractor(data_node, filename="Dataset"):
       VTK_TYPES = [
           "vtkImageData", "vtkRectilinearGrid", "vtkStructuredGrid",
           "vtkPolyData", "vtkUnstructuredGrid", "vtkUniformGridAMR",
           "vtkMultiBlockDataSet", "vtkPartitionedDataSet",
           "vtkPartitionedDataSetCollection", "vtkHyperTreeGrid",
       ]
       FILE_ASSOCIATIONS = ["VTI", "VTR", "VTS", "VTP", "VTU", "VTH", "VTM", "VTPD", "VTPC", "HTG"]
       clientside_data = data_node.GetClientSideObject().GetOutputDataObject(0)  # Gets the data object from the default output port

       # A loop is required because .IsA() also matches classes that inherit from the VTK_TYPES
       for i, vtk_type in enumerate(VTK_TYPES):
           if clientside_data.IsA(vtk_type):
               filetype = FILE_ASSOCIATIONS[i]
               extractor = CreateExtractor(filetype, data_node, registrationName=f"_{filetype}")
               extractor.Writer.FileName = filename + "_{timestep:}" + f".{filetype}"
               return extractor

       raise RuntimeError(f"Unsupported data type: {clientside_data.GetClassName()}")

   # Camera settings
   paraview.simple._DisableFirstRenderCameraReset()  # Prevents the first render from resetting the camera

   # Options
   options = catalyst.Options()

   options.CatalystLiveTrigger = "TimeStep"  # "Python", "TimeStep", "TimeValue"
   options.EnableCatalystLive = 0  # 0 (disabled), 1 (enabled)
   if options.EnableCatalystLive == 1:
       options.CatalystLiveURL = "localhost:22222"  # localhost:22222 is the default

   options.ExtractsOutputDirectory = "datasets"  # Base directory where all files are saved
   options.GenerateCinemaSpecification = 0  # 0 (disabled), 1 (enabled); generates additional descriptor files for Cinema exports
   options.GlobalTrigger = "TimeStep"  # "Python", "TimeStep", "TimeValue"

   meshSource = PVTrivialProducer(registrationName="mesh")  # "mesh" is the node where the mesh data is stored
   create_extractor(meshSource, filename="meshdata")
   particleSource = PVTrivialProducer(registrationName="particles")  # "particles" is the node where the particle data is stored
   create_extractor(particleSource, filename="particledata")

   # Called on Catalyst initialize (after the C++ side initialize)
   def catalyst_initialize():
       return

   # Called on Catalyst execute (after the C++ side update)
   def catalyst_execute(info):
       print(f"Time: {info.time}, Timestep: {info.timestep}, Cycle: {info.cycle}")
       return

   # Callback if the global trigger is set to "Python"
   def is_activated(controller):
       return True

   # Called on Catalyst finalize (after the C++ side finalize)
   def catalyst_finalize():
       return

   if __name__ == '__main__':
       paraview.simple.SaveExtractsUsingCatalystOptions(options)


For the case of ParaView Catalyst, pipelines are run with ParaView's included ``pvbatch`` executable and use the ``paraview`` library to modify the data. While pipeline scripts could be written manually, this is not advised for anything beyond the script above. It is much more practical to use ParaView's built-in ``Save Catalyst State`` button.

The process for creating a pipeline is as follows:
 1. Run at least one step of the simulation and save the data in a ParaView-compatible format, then open it in ParaView.
 2. Set up the desired scene, including filters, camera and views, and extractors.
 3. Press ``Save Catalyst State``, or the multicolored flask icon in the top left corner, and save it to a desired location.
 4. Open the script and replace the producer it uses with ``PVTrivialProducer``, setting the ``registrationName`` to either ``mesh`` or ``particles`` based on which data is used.

As an example for step four, here are a few lines from a script directly exported from ParaView:

.. code-block:: python

   # create a new 'XML Image Data Reader'
   meshdatavti = XMLImageDataReader(registrationName='meshdata.vti', FileName=['/path/to/meshdata.vti'])
   meshdatavti.CellArrayStatus = ['Bx', 'By', 'Bz', 'Ex', 'Ey', 'Ez']
   meshdatavti.TimeArray = 'None'

   # Calculator sample filter
   calculator1 = Calculator(registrationName='Calculator1', Input=meshdatavti)
   calculator1.AttributeType = 'Cell Data'
   calculator1.ResultArrayName = 'BField'
   calculator1.Function = 'sqrt(Bx^2 + By^2 + Bz^2)'

In order to use it with the mesh data coming from the simulation, the above code would be changed to:

.. code-block:: python

   # create the producer
   meshdata = PVTrivialProducer(registrationName='mesh')
   meshdata.CellArrayStatus = ['Bx', 'By', 'Bz', 'Ex', 'Ey', 'Ez']
   meshdata.TimeArray = 'None'

   # Calculator sample filter
   calculator1 = Calculator(registrationName='Calculator1', Input=meshdata)
   calculator1.AttributeType = 'Cell Data'
   calculator1.ResultArrayName = 'BField'
   calculator1.Function = 'sqrt(Bx^2 + By^2 + Bz^2)'

Step one is advised so that proper scaling and framing can be done; however, in certain cases it may not be possible. If so, a dummy object can be used instead (such as a wavelet or a geometric shape scaled appropriately) and the rest of the steps can be followed as usual.

Replay
------

Catalyst 2 supports replay, which is described in the `Catalyst replay documentation <https://catalyst-in-situ.readthedocs.io/en/latest/catalyst_replay.html>`_.

.. note::

   * TODO: Add more extensive documentation on replay