Commit f300865

Update ORT doc for ROCM 6.0 (#1862)

* Update ORT doc for ROCM 6.0
* Update amdgpu.mdx

1 parent 6d56c5f

1 file changed: docs/source/onnxruntime/usage_guides/amdgpu.mdx (+8 −8 lines)

@@ -7,11 +7,11 @@ Our testing involved AMD Instinct GPUs, and for specific GPU compatibility, plea
 This guide will show you how to run inference on the `ROCMExecutionProvider` execution provider that ONNX Runtime supports for AMD GPUs.
 
 ## Installation
-The following setup installs the ONNX Runtime support with ROCM Execution Provider with ROCm 5.7.
+The following setup installs ONNX Runtime with support for the ROCm Execution Provider on ROCm 6.0.
 
 #### 1 ROCm Installation
 
-Refer to the [ROCm installation guide](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) to install ROCm 5.7.
+Refer to the [ROCm installation guide](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) to install ROCm 6.0.
 
 #### 2 Installing `onnxruntime-rocm`
 
@@ -26,11 +26,11 @@ docker build -f Dockerfile -t ort/rocm .
 **Local Installation Steps:**
 
 ##### 2.1 PyTorch with ROCm Support
-Optimum ONNX Runtime integration relies on some functionalities of Transformers that require PyTorch. For now, we recommend to use Pytorch compiled against RoCm 5.7, that can be installed following [PyTorch installation guide](https://pytorch.org/get-started/locally/):
+The Optimum ONNX Runtime integration relies on some functionality of Transformers that requires PyTorch. For now, we recommend using PyTorch compiled against ROCm 6.0, which can be installed following the [PyTorch installation guide](https://pytorch.org/get-started/locally/):
 
 ```bash
-pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7
-# Use 'rocm/pytorch:latest' as the preferred base image when using Docker for PyTorch installation.
+pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0
+# Use 'rocm/pytorch:rocm6.0.2_ubuntu22.04_py3.10_pytorch_2.1.2' as the preferred base image when using Docker for PyTorch installation.
 ```
 
 ##### 2.2 ONNX Runtime with ROCm Execution Provider
@@ -42,13 +42,13 @@ pip install cmake onnx
 curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
 
 # Install ONNXRuntime from source
-git clone --recursive https://github.com/ROCmSoftwarePlatform/onnxruntime.git
+git clone --single-branch --branch main --recursive https://github.com/Microsoft/onnxruntime onnxruntime
 cd onnxruntime
-git checkout rocm5.7_internal_testing_eigen-3.4.zip_hash
 
-./build.sh --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
+./build.sh --config Release --build_wheel --allow_running_as_root --update --build --parallel --cmake_extra_defines CMAKE_HIP_ARCHITECTURES=gfx90a,gfx942 ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
 pip install build/Linux/Release/dist/*
 ```
+Note: these instructions build ORT for `MI210`/`MI250`/`MI300` GPUs. To support other architectures, update `CMAKE_HIP_ARCHITECTURES` in the build command.
 
 <Tip>
 To avoid conflicts between `onnxruntime` and `onnxruntime-rocm`, make sure the package `onnxruntime` is not installed by running `pip uninstall onnxruntime` prior to installing `onnxruntime-rocm`.
