# tensorflow-serving-on-intel

Version: 0.1.0 | Type: application | AppVersion: 1.16.0

TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.

## Maintainers

| Name | Email | Url |
| ---- | ----- | --- |
| tylertitsworth | <tyler.titsworth@intel.com> | <https://github.com/tylertitsworth> |

## Values

| Key | Type | Default | Description |
| --- | ---- | ------- | ----------- |
| deploy.env | object | `{"configMapName":"intel-proxy-config","enabled":true}` | Add Environment mapping |
| deploy.image | string | `"intel/intel-extension-for-tensorflow:serving-gpu"` | Intel Extension for TensorFlow Serving image |
| deploy.modelName | string | `""` | Model Name |
| deploy.replicas | int | `1` | Number of pods |
| deploy.resources.limits | object | `{"cpu":"4000m","gpu.intel.com/i915":1,"memory":"1Gi"}` | Maximum resources per pod |
| deploy.resources.limits."gpu.intel.com/i915" | int | `1` | Intel GPU Device Configuration |
| deploy.resources.requests | object | `{"cpu":"1000m","memory":"512Mi"}` | Minimum resources per pod |
| deploy.storage.nfs | object | `{"enabled":false,"path":"nil","readOnly":true,"server":"nil"}` | Network File System (NFS) storage for models |
| fullnameOverride | string | `""` | Fully qualified domain name override |
| nameOverride | string | `""` | Name of the serving service |
| pvc.size | string | `"5Gi"` | Size of the storage |
| service.type | string | `"NodePort"` | Type of service |
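The keys above map directly onto `values.yaml` overrides. As a sketch, a file like the following could enable the chart's NFS model storage and scale out the deployment; the model name, NFS server, and export path here are hypothetical placeholders, not values shipped with the chart:

```yaml
# example-values.yaml -- hypothetical override file for this chart
deploy:
  modelName: my-model          # placeholder; set to your SavedModel's name
  replicas: 2                  # run two serving pods instead of the default 1
  storage:
    nfs:
      enabled: true            # default is false
      server: nfs.example.com  # placeholder NFS server address
      path: /models            # placeholder NFS export path
      readOnly: true
service:
  type: NodePort               # matches the chart default
```

It could then be applied with a standard Helm install run from the chart directory, e.g. `helm install tf-serving . -f example-values.yaml`.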

Autogenerated from chart metadata using helm-docs v1.14.2