Commit 09dfd92

mfuntowicz authored and regisss committed
Add Google TPU to the mix (huggingface#1797)
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
1 parent edd0254 commit 09dfd92

5 files changed: +68, −3 lines changed

.github/workflows/build_main_documentation.yml (+16, −1)

@@ -49,6 +49,11 @@ jobs:
           repository: 'huggingface/optimum-amd'
           path: optimum-amd
 
+      - uses: actions/checkout@v2
+        with:
+          repository: 'huggingface/optimum-tpu'
+          path: optimum-tpu
+
       - name: Free disk space
         run: |
           df -h
@@ -150,6 +155,16 @@ jobs:
           mv furiosa-doc-build ../optimum
           cd ..
 
+      - name: Make TPU documentation
+        run: |
+          sudo docker system prune -a -f
+          cd optimum-tpu
+          pip install -U pip
+          pip install .
+          doc-builder build optimum.tpu docs/source/ --build_dir tpu-doc-build --version pr_$PR_NUMBER --version_tag_suffix "" --html --clean
+          mv tpu-doc-build ../optimum
+          cd ..
+
       - name: Make AMD documentation
         run: |
           sudo docker system prune -a -f
@@ -171,7 +186,7 @@ jobs:
       - name: Combine subpackage documentation
         run: |
           cd optimum
-          sudo python docs/combine_docs.py --subpackages nvidia amd intel neuron habana furiosa --version ${{ env.VERSION }}
+          sudo python docs/combine_docs.py --subpackages nvidia amd intel neuron tpu habana furiosa --version ${{ env.VERSION }}
           cd ..
 
       - name: Push to repositories
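Both workflows hand the subpackage list to `docs/combine_docs.py`. As a rough, hypothetical sketch of the CLI that this invocation implies (the real parser lives in `combine_docs.py` and may name or validate things differently):

```python
import argparse

# Hypothetical reconstruction of the combine_docs.py CLI, inferred from the
# invocation in the workflow above; details of the real parser may differ.
parser = argparse.ArgumentParser(
    description="Combine subpackage docs into the main Optimum doc build"
)
parser.add_argument("--subpackages", nargs="+", required=True,
                    help="Subpackage names, e.g. nvidia amd intel neuron tpu habana furiosa")
parser.add_argument("--version", required=True,
                    help="Doc version tag, e.g. v1.17.0 or pr_1797")

# Mirror the new invocation from this commit (the version value is illustrative):
args = parser.parse_args(
    "--subpackages nvidia amd intel neuron tpu habana furiosa --version pr_1797".split()
)
```

With `nargs="+"`, the space-separated names after `--subpackages` are collected into a list, which is why adding `tpu` to the command line is the only change the combine step needs.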

.github/workflows/build_pr_documentation.yml (+16, −1)

@@ -53,6 +53,11 @@ jobs:
           repository: 'huggingface/optimum-amd'
           path: optimum-amd
 
+      - uses: actions/checkout@v2
+        with:
+          repository: 'huggingface/optimum-tpu'
+          path: optimum-tpu
+
       - name: Setup environment
         run: |
           pip uninstall -y doc-builder
@@ -91,6 +96,16 @@ jobs:
           sudo mv amd-doc-build ../optimum
           cd ..
 
+      - name: Make TPU documentation
+        run: |
+          sudo docker system prune -a -f
+          cd optimum-tpu
+          pip install -U pip
+          pip install .
+          doc-builder build optimum.tpu docs/source/ --build_dir tpu-doc-build --version pr_$PR_NUMBER --version_tag_suffix "" --html --clean
+          mv tpu-doc-build ../optimum
+          cd ..
+
       - name: Make Optimum documentation
         run: |
           sudo docker system prune -a -f
@@ -101,7 +116,7 @@ jobs:
       - name: Combine subpackage documentation
         run: |
           cd optimum
-          sudo python docs/combine_docs.py --subpackages nvidia amd intel neuron habana furiosa --version pr_$PR_NUMBER
+          sudo python docs/combine_docs.py --subpackages nvidia amd intel neuron tpu habana furiosa --version pr_$PR_NUMBER
           sudo mv optimum-doc-build ../
           cd ..
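In the PR workflow the doc version tag is `pr_$PR_NUMBER`, expanded by the shell from a variable the job environment provides. A tiny illustrative sketch of the same expansion (the PR number here is a made-up example, not taken from the workflow):

```python
import os

# In the real workflow, PR_NUMBER is set by the CI environment;
# we fake it here purely for illustration.
os.environ["PR_NUMBER"] = "1797"
version = f"pr_{os.environ['PR_NUMBER']}"
```

This is the value `doc-builder` receives via `--version`, so each pull request gets its own preview build directory.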

docs/combine_docs.py (+28)

@@ -108,6 +108,31 @@ def add_neuron_doc(base_toc: List):
     )
 
 
+def add_tpu_doc(base_toc: List):
+    """
+    Extends the table of content with a section about Optimum TPU.
+
+    Args:
+        base_toc (List): table of content for the doc of Optimum.
+    """
+    # Update optimum table of contents
+    base_toc.insert(
+        SUBPACKAGE_TOC_INSERT_INDEX,
+        {
+            "sections": [
+                {
+                    # Ideally this should directly point at https://huggingface.co/docs/optimum-tpu/index
+                    # Current hacky solution is to have a redirection in _redirects.yml
+                    "local": "docs/optimum-tpu/index",
+                    "title": "🤗 Optimum-TPU",
+                }
+            ],
+            "title": "Google TPUs",
+            "isExpanded": False,
+        },
+    )
+
+
 def main():
     args = parser.parse_args()
     optimum_path = Path("optimum-doc-build")
@@ -121,6 +146,9 @@ def main():
         if subpackage == "neuron":
             # Neuron has its own doc so it is managed differently
             add_neuron_doc(base_toc)
+        elif subpackage == "tpu":
+            # Optimum TPU has its own doc so it is managed differently
+            add_tpu_doc(base_toc)
+        elif subpackage == "nvidia":
             # At the moment, Optimum Nvidia's doc is the README of the GitHub repo
             # It is linked to in optimum/docs/source/nvidia_overview.mdx
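The new `add_tpu_doc` splices a section dict into the Optimum table of contents at a fixed index. A self-contained sketch of that mechanism, using a dummy TOC and an assumed `SUBPACKAGE_TOC_INSERT_INDEX` of 1 (the real constant is defined elsewhere in `combine_docs.py` and may differ):

```python
from typing import Dict, List

SUBPACKAGE_TOC_INSERT_INDEX = 1  # assumed value; the real constant lives in combine_docs.py

def add_tpu_doc(base_toc: List[Dict]) -> None:
    """Insert the Optimum TPU section into the table of contents, in place."""
    base_toc.insert(
        SUBPACKAGE_TOC_INSERT_INDEX,
        {
            "sections": [{"local": "docs/optimum-tpu/index", "title": "🤗 Optimum-TPU"}],
            "title": "Google TPUs",
            "isExpanded": False,
        },
    )

# Dummy TOC standing in for Optimum's real one:
toc = [{"title": "Overview"}, {"title": "Habana"}]
add_tpu_doc(toc)
```

Because `list.insert` shifts later entries instead of overwriting them, the TPU section lands between the existing entries and the rest of the TOC is preserved.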

docs/source/_redirects.yml (+3)

@@ -28,3 +28,6 @@ intel_trainer: intel/reference_inc
 
 # Optimum Neuron
 docs/optimum-neuron/index: /docs/optimum-neuron/index
+
+# Optimum TPU
+docs/optimum-tpu/index: /docs/optimum-tpu/index
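The `_redirects.yml` entry is the workaround mentioned in the `combine_docs.py` comment: the TOC entry points at a local path, and a redirect sends readers to the standalone Optimum TPU docs. A minimal, hypothetical sketch of how such a redirect table could be resolved; the real resolution happens in the docs frontend, not in this repository:

```python
# Redirect table mirroring the _redirects.yml entries touched by this commit.
redirects = {
    "docs/optimum-neuron/index": "/docs/optimum-neuron/index",
    "docs/optimum-tpu/index": "/docs/optimum-tpu/index",  # added by this commit
}

def resolve(path: str) -> str:
    """Return the redirect target if one exists, otherwise the path unchanged."""
    return redirects.get(path, path)
```

Unknown paths fall through unchanged, so only the explicitly listed entries are rewritten.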

docs/source/index.mdx (+5, −1)

@@ -25,7 +25,7 @@ As such, Optimum enables developers to efficiently use any of these platforms wi
 The packages below enable you to get the best of the 🤗 Hugging Face ecosystem on various types of devices.
 
 <div class="mt-10">
-  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5">
+  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-4 md:gap-y-4 md:gap-x-5">
     <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://github.com/huggingface/optimum-nvidia"
       ><div class="w-full text-center bg-gradient-to-br from-green-600 to-green-600 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">NVIDIA</div>
       <p class="text-gray-700">Accelerate inference with NVIDIA TensorRT-LLM on the <span class="underline" onclick="event.preventDefault(); window.open('https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/', '_blank');">NVIDIA platform</span></p>
@@ -42,6 +42,10 @@ The packages below enable you to get the best of the 🤗 Hugging Face ecosystem
       ><div class="w-full text-center bg-gradient-to-br from-orange-400 to-orange-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">AWS Trainium/Inferentia</div>
       <p class="text-gray-700">Accelerate your training and inference workflows with <span class="underline" onclick="event.preventDefault(); window.open('https://aws.amazon.com/machine-learning/trainium/', '_blank');">AWS Trainium</span> and <span class="underline" onclick="event.preventDefault(); window.open('https://aws.amazon.com/machine-learning/inferentia/', '_blank');">AWS Inferentia</span></p>
     </a>
+    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://huggingface.co/docs/optimum-tpu/index"
+      ><div class="w-full text-center bg-gradient-to-tr from-blue-200 to-blue-600 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Google TPUs</div>
+      <p class="text-gray-700">Accelerate your training and inference workflows with <span class="underline" onclick="event.preventDefault(); window.open('https://cloud.google.com/tpu', '_blank');">Google TPUs</span></p>
+    </a>
     <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./habana/index"
       ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Habana</div>
      <p class="text-gray-700">Maximize training throughput and efficiency with <span class="underline" onclick="event.preventDefault(); window.open('https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html', '_blank');">Habana's Gaudi processor</span></p>
