- Maximize training throughput and efficiency with [Habana's Gaudi processor](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html)
- Accelerate inference with NVIDIA TensorRT-LLM on the [NVIDIA platform](https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/)
- Accelerate your training and inference workflows with [AWS Trainium](https://aws.amazon.com/machine-learning/trainium/) and [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/)
- Fast and efficient inference on [FuriosaAI WARBOY](https://www.furiosa.ai/)
> [!TIP]
> Some packages provide hardware-agnostic features (e.g. INC interface in Optimum Intel).
## Open-source integrations
🤗 Optimum also supports a variety of open-source frameworks to make model optimization very easy.
- Apply quantization and graph optimization to accelerate training and inference of Transformers models with [ONNX Runtime](https://onnxruntime.ai/)
- A one-liner integration to use [PyTorch's BetterTransformer](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) with Transformers models
`examples/onnxruntime/training/image-classification/README.md` (+2 −3)
```diff
@@ -11,9 +11,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-# Language Modeling
-
-## Image Classification Training
+# Image Classification
 
 
 By running the scripts [`run_image_classification.py`](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/image-classification/run_image_classification.py) we will be able to leverage the [`ONNX Runtime`](https://github.com/microsoft/onnxruntime) accelerator to train the language models from the
```
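For reference, launching such an example script generally follows the Transformers example-script conventions. The command below is a sketch: the model, dataset, and output path are illustrative assumptions and are not taken from this README.

```shell
# Hypothetical invocation; adjust flags to your setup and dataset.
torchrun --nproc_per_node=1 run_image_classification.py \
    --model_name_or_path google/vit-base-patch16-224-in21k \
    --dataset_name beans \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 32 \
    --output_dir ./vit-ort-output
```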