workflows/charts/tensorflow-serving/Chart.yaml (+1 -1)

```diff
@@ -13,7 +13,7 @@
 # limitations under the License.

 apiVersion: v2
-name: tensorflow-serving-on-intel
+name: tensorflow-serving
 description: TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.

 # A chart can be either an 'application' or a 'library' chart.
```
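One practical consequence of renaming the chart from `tensorflow-serving-on-intel` to `tensorflow-serving` is that any Helm command referencing the old chart name or directory needs updating. A minimal sketch, assuming the chart is installed directly from a checkout of this repository (the release name `tf-serving` is illustrative, not from the diff):

```shell
# Install the renamed chart from the repo layout shown above.
# Before this change the directory/chart was tensorflow-serving-on-intel;
# after it, the chart is referenced as tensorflow-serving.
helm install tf-serving ./workflows/charts/tensorflow-serving
```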
workflows/charts/tensorflow-serving/README.md (+9 -6)

```diff
@@ -1,14 +1,17 @@
-# tensorflow-serving-on-intel
+# TensorFlow Serving on Intel GPU
+
+TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.
```
workflows/charts/torchserve/README.md (+11 -2)

```diff
@@ -1,11 +1,20 @@
 # TorchServe with Intel Optimizations

-TorchServe on Intel is a performant, flexible and easy to use tool for serving PyTorch models in production.
+TorchServe is a performant, flexible and easy to use tool for serving PyTorch models in production on Intel GPUs.

-For more information about how to use TorchServe with Intel Optimizations, check out the [container documentation](../../../pytorch/serving/README.md).
+For more information about how to use TorchServe with Intel Optimizations, check out the [container documentation](https://github.com/intel/ai-containers/blob/main/pytorch/serving/README.md).
```
workflows/charts/torchserve/README.md.gotmpl (+9 -2)

```diff
@@ -2,11 +2,18 @@

 {{ template "chart.description" . }}

-For more information about how to use TorchServe with Intel Optimizations, check out the [container documentation](../../../pytorch/serving/README.md).
+For more information about how to use TorchServe with Intel Optimizations, check out the [container documentation](https://github.com/intel/ai-containers/blob/main/pytorch/serving/README.md).
```