
Commit 03e8e67

Author: Tyler Titsworth
TensorFlow Serving on XPU Chart (#359)
Signed-off-by: tylertitsworth <tyler.titsworth@intel.com>
1 parent 9878991 commit 03e8e67

File tree: 11 files changed, +393 −1 lines changed
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
@@ -0,0 +1,42 @@
# Copyright (c) 2024 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v2
name: tensorflow-serving-on-intel
description: TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
maintainers:
  - name: tylertitsworth
    email: tyler.titsworth@intel.com
    url: https://github.com/tylertitsworth
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
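
Before publishing, the chart metadata above can be sanity-checked and packaged with standard Helm tooling — a minimal sketch, assuming the chart source sits in a local directory named tensorflow-serving-on-intel:

  # Chart directory path is illustrative.
  helm lint ./tensorflow-serving-on-intel
  # Packaging produces tensorflow-serving-on-intel-0.1.0.tgz, named from the version field above.
  helm package ./tensorflow-serving-on-intel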
@@ -0,0 +1,31 @@
# tensorflow-serving-on-intel

![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.16.0](https://img.shields.io/badge/AppVersion-1.16.0-informational?style=flat-square)

TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.

## Maintainers

| Name | Email | Url |
| ---- | ------ | --- |
| tylertitsworth | <tyler.titsworth@intel.com> | <https://github.com/tylertitsworth> |

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| deploy.env | object | `{"configMapName":"intel-proxy-config","enabled":true}` | Environment variable mapping from an existing ConfigMap |
| deploy.image | string | `"intel/intel-extension-for-tensorflow:serving-gpu"` | Intel Extension for TensorFlow Serving image |
| deploy.modelName | string | `""` | Model name |
| deploy.replicas | int | `1` | Number of pods |
| deploy.resources.limits | object | `{"cpu":"4000m","gpu.intel.com/i915":1,"memory":"1Gi"}` | Maximum resources per pod |
| deploy.resources.limits."gpu.intel.com/i915" | int | `1` | Intel GPU device configuration |
| deploy.resources.requests | object | `{"cpu":"1000m","memory":"512Mi"}` | Minimum resources per pod |
| deploy.storage.nfs | object | `{"enabled":false,"path":"nil","readOnly":true,"server":"nil"}` | Network File System (NFS) storage for models |
| fullnameOverride | string | `""` | Fully qualified name override |
| nameOverride | string | `""` | Name of the serving service |
| pvc.size | string | `"5Gi"` | Size of the model storage PVC |
| service.type | string | `"NodePort"` | Type of service |

----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)
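
As a usage sketch (the release name, namespace, and chart path below are illustrative, not part of the chart): the only value without a usable default is deploy.modelName, which must be supplied at install time.

  # "tf-serving", the namespace, and the chart path are assumptions for this example.
  helm install tf-serving ./tensorflow-serving-on-intel \
    --namespace serving --create-namespace \
    --set deploy.modelName=resnet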
@@ -0,0 +1,19 @@
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "tensorflow-serving.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "tensorflow-serving.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "tensorflow-serving.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:8501
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "tensorflow-serving.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
2. Make a prediction:
  curl http://$NODE_IP:$NODE_PORT/v1/models/{{ .Values.deploy.modelName }}
  curl -X POST http://$NODE_IP:$NODE_PORT/v1/models/{{ .Values.deploy.modelName }}:predict -d '{"instances": []}'
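
For context on the request body: TensorFlow Serving's REST predict endpoint accepts a row-format "instances" list or a column-format "inputs" object. A hypothetical call against a model named half_plus_two (model name and input values are illustrative, not part of the chart):

  # Assumes NODE_IP/NODE_PORT were exported as above; model and inputs are examples only.
  curl -X POST http://$NODE_IP:$NODE_PORT/v1/models/half_plus_two:predict \
    -d '{"instances": [1.0, 2.0, 5.0]}'
  # A successful response has the shape {"predictions": [...]}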
@@ -0,0 +1,51 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "tensorflow-serving.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "tensorflow-serving.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "tensorflow-serving.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "tensorflow-serving.labels" -}}
helm.sh/chart: {{ include "tensorflow-serving.chart" . }}
{{ include "tensorflow-serving.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "tensorflow-serving.selectorLabels" -}}
app.kubernetes.io/name: {{ include "tensorflow-serving.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
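
To make the helpers concrete: for a hypothetical release of this chart named tf-serving, the common labels block above would render roughly as follows (the release name is an assumption; the remaining values come from Chart.yaml):

  # Approximate rendered output for an assumed release "tf-serving".
  helm.sh/chart: tensorflow-serving-on-intel-0.1.0
  app.kubernetes.io/name: tensorflow-serving-on-intel
  app.kubernetes.io/instance: tf-serving
  app.kubernetes.io/version: "1.16.0"
  app.kubernetes.io/managed-by: Helm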
@@ -0,0 +1,84 @@
# Copyright (c) 2024 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{{- $name := .Values.deploy.modelName | required ".Values.deploy.modelName is required." -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "tensorflow-serving.fullname" . }}
  labels:
    {{- include "tensorflow-serving.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.deploy.replicas }}
  selector:
    matchLabels:
      {{- include "tensorflow-serving.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "tensorflow-serving.labels" . | nindent 8 }}
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      containers:
        - name: tensorflow-serving
          image: {{ .Values.deploy.image }}
          {{- if eq .Values.deploy.env.enabled true }}
          envFrom:
            - configMapRef:
                name: {{ .Values.deploy.env.configMapName }}
          {{- end }}
          env:
            - name: MODEL_NAME
              value: {{ .Values.deploy.modelName }}
          ports:
            - name: grpc
              containerPort: 8500
              protocol: TCP
            - name: rest
              containerPort: 8501
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: rest
            initialDelaySeconds: 15
            timeoutSeconds: 1
          volumeMounts:
            - mountPath: /dev/shm
              name: dshm
            - name: model
              mountPath: /models/{{ .Values.deploy.modelName }}
          resources:
            {{- toYaml .Values.deploy.resources | nindent 12 }}
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
        {{- if .Values.deploy.storage.nfs.enabled }}
        - name: model
          nfs:
            server: {{ .Values.deploy.storage.nfs.server }}
            path: {{ .Values.deploy.storage.nfs.path }}
            readOnly: {{ .Values.deploy.storage.nfs.readOnly }}
        {{- else }}
        - name: model
          persistentVolumeClaim:
            claimName: {{ include "tensorflow-serving.fullname" . }}-model-dir
        {{- end }}
@@ -0,0 +1,29 @@
1+
# Copyright (c) 2024 Intel Corporation
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
---
16+
{{- if not .Values.deploy.storage.nfs.enabled }}
17+
apiVersion: v1
18+
kind: PersistentVolumeClaim
19+
metadata:
20+
name: {{ include "tensorflow-serving.fullname" . }}-model-dir
21+
labels:
22+
{{- include "tensorflow-serving.labels" . | nindent 4 }}
23+
spec:
24+
accessModes:
25+
- ReadWriteMany
26+
resources:
27+
requests:
28+
storage: {{ .Values.pvc.size }}
29+
{{- end }}
@@ -0,0 +1,31 @@
# Copyright (c) 2024 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  name: {{ include "tensorflow-serving.fullname" . }}
  labels:
    {{- include "tensorflow-serving.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: grpc
      port: 8500
      targetPort: grpc
    - name: rest
      port: 8501
      targetPort: rest
  selector:
    {{- include "tensorflow-serving.selectorLabels" . | nindent 4 }}
@@ -0,0 +1,29 @@
1+
# Copyright (c) 2024 Intel Corporation
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
apiVersion: v1
16+
kind: Pod
17+
metadata:
18+
name: "{{ include "tensorflow-serving.fullname" . }}-test-connection"
19+
labels:
20+
{{- include "tensorflow-serving.labels" . | nindent 4 }}
21+
annotations:
22+
"helm.sh/hook": test
23+
spec:
24+
containers:
25+
- name: info
26+
image: curlimages/curl
27+
command: ['sh', '-c']
28+
args: ['curl -f {{ include "tensorflow-serving.fullname" . }}:8501/v1/models/{{ .Values.deploy.modelName}}']
29+
restartPolicy: OnFailure
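
Once a release is installed, the hook above can be exercised with Helm's test command — a short sketch with an assumed release name and namespace:

  # "tf-serving" and "serving" are illustrative; use your actual release name and namespace.
  helm test tf-serving --namespace serving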
