
Commit 9cab6f9

Authored by Wang-Zhongwei, pre-commit-ci[bot], and ax3l

Doc: Pitzer (OSC) (#5064)

* add: profile, dependencies, and sbatch scripts for OSC pitzer
* add: pitzer documentation page
* fix: add pitzer name to manual sidebar
* chore: update pitzer dependencies installation script filename
* fix formatting issues in doc
* separate cpu and gpu scripts
* add boost support
* add separate instructions for CPU and GPU dependencies
* rename venv name
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  (for more information, see https://pre-commit.ci)
* add header to dependencies intalling scripts
* chores for doc
* fix: dependencies installing scripts typo
* fix typos
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  (for more information, see https://pre-commit.ci)
* chore: Update batch script and profile file names for Pitzer CPU and V100
* remove gpu allocation in CPU profile
* Update Docs/source/install/hpc/pitzer.rst (Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>)
* Update Docs/source/install/hpc.rst (Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>)
* Update Docs/source/install/hpc/pitzer.rst (Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>)
* use built-in cuda-aware MPI
* Remove repetition of source profile

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

1 parent f10aa70 commit 9cab6f9

9 files changed (+752, -2 lines)

.gitignore (+1, -1)

@@ -4,7 +4,7 @@ Python/pywarpx/libwarpx*.so
 d/
 f/
 o/
-build/
+build*/
 tmp_build_dir/
 test_dir
 test_dir/

Docs/source/install/hpc.rst (+2, -1)

@@ -36,6 +36,7 @@ This section documents quick-start guides for a selection of supercomputers that
 hpc/crusher
 hpc/frontier
 hpc/fugaku
+hpc/greatlakes
 hpc/hpc3
 hpc/juwels
 hpc/karolina
@@ -46,12 +47,12 @@ This section documents quick-start guides for a selection of supercomputers that
 hpc/lxplus
 hpc/ookami
 hpc/perlmutter
+hpc/pitzer
 hpc/polaris
 hpc/quartz
 hpc/spock
 hpc/summit
 hpc/taurus
-hpc/greatlakes

 .. tip::

Docs/source/install/hpc/pitzer.rst (new file, +274)

.. _building-pitzer:

Pitzer (OSC)
============

The `Pitzer cluster <https://www.osc.edu/supercomputing/computing/pitzer>`__ is located at the Ohio Supercomputer Center (OSC). It is currently the main CPU/GPU cluster at OSC. However, the `Cardinal cluster <https://www.osc.edu/resources/technical_support/supercomputers/cardinal>`__ is scheduled to replace Pitzer as the next major CPU/GPU cluster at OSC in the second half of 2024. A list of all OSC clusters can be found `here <https://www.osc.edu/services/cluster_computing>`__.

The Pitzer cluster offers a variety of partitions suitable for different computational needs, including GPU nodes, CPU nodes, and nodes with large memory capacities. For more information on the specifications and capabilities of these partitions, visit the `Ohio Supercomputer Center's Pitzer page <https://www.osc.edu/supercomputing/computing/pitzer>`__.

Introduction
------------

If you are new to this system, **please see the following resources**:

* `Pitzer user guide <https://www.osc.edu/resources/getting_started/new_user_resource_guide>`__
* Batch system: `Slurm <https://www.osc.edu/supercomputing/batch-processing-at-osc>`__
* `Jupyter service <https://www.osc.edu/vocabulary/documentation/jupyter>`__
* `Filesystems <https://www.osc.edu/supercomputing/storage-environment-at-osc/storage-hardware/overview_of_file_systems>`__:

  * ``$HOME``: per-user directory, use only for inputs, source, and scripts; backed up (500GB)
  * ``/fs/ess``: per-project storage directory, use for long-term storage of data and analysis; backed up (1-5TB)
  * ``/fs/scratch``: per-project production directory; fast I/O for parallel jobs; not backed up (100TB)
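
For large simulation output, prefer the scratch filesystem over ``$HOME``. As a minimal sketch (the per-project directory layout under ``/fs/scratch`` is an assumption; check the path assigned to your project), a run directory could be prepared like this:

.. code-block:: bash

   # sketch: create a run directory on the per-project scratch space
   # replace <project> with your OSC project name, e.g. pas2024
   mkdir -p /fs/scratch/<project>/$USER/warpx_runs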

.. _building-pitzer-preparation:

Preparation
-----------

Use the following commands to download the WarpX source code:

.. code-block:: bash

   git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx

On Pitzer, you can run either on GPU nodes with V100 GPUs or on CPU nodes.

.. tab-set::

   .. tab-item:: V100 GPUs

      We use system software modules, and add environment hints and further dependencies via the file ``$HOME/pitzer_v100_warpx.profile``.
      Create it now:

      .. code-block:: bash

         cp $HOME/src/warpx/Tools/machines/pitzer-osc/pitzer_v100_warpx.profile.example $HOME/pitzer_v100_warpx.profile

      .. dropdown:: Script Details
         :color: light
         :icon: info
         :animate: fade-in-slide-down

         .. literalinclude:: ../../../../Tools/machines/pitzer-osc/pitzer_v100_warpx.profile.example
            :language: bash

      Edit the 2nd line of this script, which sets the ``export proj=""`` variable.
      For example, if you are a member of the project ``pas2024``, then run ``nano $HOME/pitzer_v100_warpx.profile`` and edit line 2 to read:

      .. code-block:: bash

         export proj="pas2024"

      Exit the ``nano`` editor with ``Ctrl`` + ``O`` (save) and then ``Ctrl`` + ``X`` (exit).
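
      If you prefer not to open an editor, the same change can be made non-interactively; this is just a convenience sketch (the project name ``pas2024`` is an example):

      .. code-block:: bash

         # sketch: set the project ID in line 2 of the profile without an editor
         sed -i 's/export proj=""/export proj="pas2024"/' $HOME/pitzer_v100_warpx.profile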

      .. important::

         Now, and as the first step on future logins to Pitzer, activate these environment settings:

         .. code-block:: bash

            source $HOME/pitzer_v100_warpx.profile

      Finally, since Pitzer does not yet provide software modules for some of our dependencies, install them once:

      .. code-block:: bash

         bash $HOME/src/warpx/Tools/machines/pitzer-osc/install_v100_dependencies.sh

      .. dropdown:: Script Details
         :color: light
         :icon: info
         :animate: fade-in-slide-down

         .. literalinclude:: ../../../../Tools/machines/pitzer-osc/install_v100_dependencies.sh
            :language: bash

   .. tab-item:: CPU Nodes

      We use system software modules, and add environment hints and further dependencies via the file ``$HOME/pitzer_cpu_warpx.profile``.
      Create it now:

      .. code-block:: bash

         cp $HOME/src/warpx/Tools/machines/pitzer-osc/pitzer_cpu_warpx.profile.example $HOME/pitzer_cpu_warpx.profile

      .. dropdown:: Script Details
         :color: light
         :icon: info
         :animate: fade-in-slide-down

         .. literalinclude:: ../../../../Tools/machines/pitzer-osc/pitzer_cpu_warpx.profile.example
            :language: bash

      Edit the 2nd line of this script, which sets the ``export proj=""`` variable.
      For example, if you are a member of the project ``pas2024``, then run ``nano $HOME/pitzer_cpu_warpx.profile`` and edit line 2 to read:

      .. code-block:: bash

         export proj="pas2024"

      Exit the ``nano`` editor with ``Ctrl`` + ``O`` (save) and then ``Ctrl`` + ``X`` (exit).

      .. important::

         Now, and as the first step on future logins to Pitzer, activate these environment settings:

         .. code-block:: bash

            source $HOME/pitzer_cpu_warpx.profile

      Finally, since Pitzer does not yet provide software modules for some of our dependencies, install them once:

      .. code-block:: bash

         bash $HOME/src/warpx/Tools/machines/pitzer-osc/install_cpu_dependencies.sh

      .. dropdown:: Script Details
         :color: light
         :icon: info
         :animate: fade-in-slide-down

         .. literalinclude:: ../../../../Tools/machines/pitzer-osc/install_cpu_dependencies.sh
            :language: bash

.. _building-pitzer-compilation:

Compilation
-----------

Use the following :ref:`cmake commands <building-cmake>` to compile the application executable:

.. tab-set::

   .. tab-item:: V100 GPUs

      .. code-block:: bash

         cd $HOME/src/warpx
         rm -rf build_v100

         cmake -S . -B build_v100 -DWarpX_COMPUTE=CUDA -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
         cmake --build build_v100 -j 48

      The WarpX application executables are now in ``$HOME/src/warpx/build_v100/bin/``. Additionally, the following commands will install WarpX as a Python module:

      .. code-block:: bash

         cd $HOME/src/warpx
         rm -rf build_v100_py

         cmake -S . -B build_v100_py -DWarpX_COMPUTE=CUDA -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
         cmake --build build_v100_py -j 48 --target pip_install
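
      As an optional sanity check (a sketch, not part of the install scripts), you can verify that the Python module is importable in the environment set up by the profile:

      .. code-block:: bash

         # should print the install location of the pywarpx package
         python3 -c "import pywarpx; print(pywarpx.__file__)"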

   .. tab-item:: CPU Nodes

      .. code-block:: bash

         cd $HOME/src/warpx
         rm -rf build

         cmake -S . -B build -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
         cmake --build build -j 48

      The WarpX application executables are now in ``$HOME/src/warpx/build/bin/``. Additionally, the following commands will install WarpX as a Python module:

      .. code-block:: bash

         cd $HOME/src/warpx
         rm -rf build_py

         cmake -S . -B build_py -DWarpX_FFT=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
         cmake --build build_py -j 48 --target pip_install

Now, you can :ref:`submit Pitzer compute jobs <running-pitzer>` for WarpX :ref:`Python (PICMI) scripts <usage-picmi>` (:ref:`example scripts <usage-examples>`). Or, you can use the WarpX executables to submit Pitzer jobs (:ref:`example inputs <usage-examples>`). For executables, you can reference their location in your :ref:`job script <running-pitzer>` or copy them to a location in ``/fs/scratch``.
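
For example, a minimal sketch of staging a GPU executable in a scratch run directory (the directory layout and executable name are assumptions; check ``build_v100/bin/`` for the exact name produced by your build options):

.. code-block:: bash

   # sketch: copy the 3D executable next to your inputs on scratch
   mkdir -p /fs/scratch/${proj}/${USER}/my_first_run
   cp $HOME/src/warpx/build_v100/bin/warpx.3d.* /fs/scratch/${proj}/${USER}/my_first_run/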

.. _building-pitzer-update:

Update WarpX & Dependencies
---------------------------

If you already installed WarpX in the past and want to update it, start by getting the latest source code:

.. code-block:: bash

   cd $HOME/src/warpx

   # read the output of this command - does it look ok?
   git status

   # get the latest WarpX source code
   git fetch
   git pull

   # read the output of these commands - do they look ok?
   git status
   git log     # press q to exit
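
If ``git status`` reports local changes that you want to keep out of the way while updating, a standard git pattern is (sketch):

.. code-block:: bash

   git stash       # set local changes aside before pulling
   git pull
   git stash pop   # re-apply them afterwards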

And, if needed,

- :ref:`update the pitzer_cpu_warpx.profile or pitzer_v100_warpx.profile file <building-pitzer-preparation>`,
- log out and back in, then activate the updated environment profile as usual,
- :ref:`execute the dependency install scripts <building-pitzer-preparation>`.

As a last step, clean the build directories with ``rm -rf $HOME/src/warpx/build_*`` and rebuild WarpX.

.. _running-pitzer:

Running
-------

.. tab-set::

   .. tab-item:: V100 GPUs

      Pitzer's GPU partition includes:

      - 32 nodes, each equipped with two V100 (16GB) GPUs.
      - 42 nodes, each with two V100 (32GB) GPUs.
      - 4 large memory nodes, each with quad V100 (32GB) GPUs.

      To run a WarpX simulation on the GPU nodes, use the batch script provided below. Adjust the ``-N`` parameter in the script to match the number of nodes you intend to use. Each node in this partition supports running one MPI rank per GPU.

      .. literalinclude:: ../../../../Tools/machines/pitzer-osc/pitzer_v100.sbatch
         :language: bash
         :caption: Copy this file from ``$HOME/src/warpx/Tools/machines/pitzer-osc/pitzer_v100.sbatch``.

      After preparing your script, submit your job with the following command:

      .. code-block:: bash

         sbatch pitzer_v100.sbatch

   .. tab-item:: CPU Nodes

      For CPU-based computations, Pitzer offers:

      - 224 nodes, each with dual Intel Xeon Gold 6148 CPUs and 192 GB RAM.
      - 340 nodes, each with dual Intel Xeon Platinum 8268 CPUs and 192 GB RAM.
      - 16 large memory nodes.

      To submit a job to the CPU partition, use the provided batch script. Ensure you have copied the script to your working directory.

      .. literalinclude:: ../../../../Tools/machines/pitzer-osc/pitzer_cpu.sbatch
         :language: bash
         :caption: Copy this file from ``$HOME/src/warpx/Tools/machines/pitzer-osc/pitzer_cpu.sbatch``.

      Submit your job with:

      .. code-block:: bash

         sbatch pitzer_cpu.sbatch
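
In either case, you can monitor and manage submitted jobs with the standard Slurm commands, e.g.:

.. code-block:: bash

   squeue -u $USER    # list your queued and running jobs
   scancel <jobid>    # cancel a job if needed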

.. _post-processing-osc:

Post-Processing
---------------

For post-processing, many users prefer to use the online `Jupyter service <https://ondemand.osc.edu/pun/sys/dashboard/batch_connect/sessions>`__ (`documentation <https://www.osc.edu/vocabulary/documentation/jupyter>`__) that is directly connected to the cluster's fast filesystem.

.. note::

   This section is a stub and contributions are welcome.
   We can document further details here, e.g., which post-processing Python software to recommend and install, or how to customize Jupyter kernels.
