
Commit 18f2760

Author: Tyler Titsworth
Correct TM&B (#329)
Signed-off-by: tylertitsworth <tyler.titsworth@intel.com>
1 parent: 658e61e

File tree: 13 files changed, +23 −25 lines

CONTRIBUTING.md (+2 −2)

@@ -1,6 +1,6 @@
 # Contributing
 
-Thank you for considering contributing to Intel® AI Containers! We welcome your help to make this project better. Contributing to an open source project can be a daunting task, but the Intel AI Containers team is here to help you through the process. If at any point in this process you feel out of your depth or confused by our processes, please don't hesitate to reach out to a maintainer or file an [issue](https://github.com/intel/ai-containers/issues).
+Thank you for considering contributing to AI Containers! We welcome your help to make this project better. Contributing to an open source project can be a daunting task, but the Intel AI Containers team is here to help you through the process. If at any point in this process you feel out of your depth or confused by our processes, please don't hesitate to reach out to a maintainer or file an [issue](https://github.com/intel/ai-containers/issues).
 
 ## Getting Started
 
@@ -138,4 +138,4 @@ commit automatically with `git commit -s`.
 
 ## License
 
-Intel® AI Containers is licensed under the terms in [LICENSE](./LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
+AI Containers is licensed under the terms in [LICENSE](./LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

README.md (+2 −2)

@@ -1,4 +1,4 @@
-# Intel® AI Containers
+# AI Containers
 
 [![OpenSSF Best Practices](https://www.bestpractices.dev/projects/8270/badge)](https://www.bestpractices.dev/projects/8270)
 [![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/intel/ai-containers/badge)](https://securityscorecards.dev/viewer/?uri=github.com/intel/ai-containers)
@@ -28,7 +28,7 @@ docker login $REGISTRY
 docker pull $REGISTRY/$REPO:latest
 ```
 
-The maintainers of Intel® AI Containers use Azure to store containers, but an open source container registry like [harbor](https://github.com/goharbor/harbor) is preferred.
+The maintainers of AI Containers use Azure to store containers, but an open source container registry like [harbor](https://github.com/goharbor/harbor) is preferred.
 
 > [!WARNING]
 > You can optionally skip this step and use some placeholder values, however some container groups depend on other images and will pull from a registry that you have not defined and result in an error.
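
For readers following the registry step touched by this hunk, `$REGISTRY` and `$REPO` have to be exported before the pull. A minimal sketch with made-up placeholder values (the registry name and repository path below are illustrative, not values from the repository):

```bash
# Placeholder values only -- substitute your own registry and repository.
export REGISTRY=example.azurecr.io
export REPO=ai-containers/example

docker login $REGISTRY              # authenticate against the registry
docker pull $REGISTRY/$REPO:latest  # pull the latest published image
```

As the warning in the hunk notes, placeholder values only work for container groups that do not pull dependent images from the undefined registry.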

classical-ml/README.md (+1 −1)

@@ -63,7 +63,7 @@ The images below additionally include [Jupyter Notebook](https://jupyter.org/) s
 
 ## Build from Source
 
-To build the images from source, clone the [Intel® AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
+To build the images from source, clone the [AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
 
 ```bash
 cd classical-ml
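
The same "Build from Source" sentence recurs in the python, pytorch, tensorflow, and workflows hunks below, and each hunk cuts off just after the `cd` step, so the build command itself is not part of this diff. As a hedged illustration of what typically follows (the Compose invocation below is an assumption, not taken from this commit):

```bash
# Assumed pattern: each component directory carries its own Compose file,
# so the build is run from inside that directory.
cd classical-ml
docker compose build   # build the images defined for this component
docker compose images  # list what was built
```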

mkdocs.yml (+1 −1)

@@ -51,7 +51,7 @@ plugins:
 - read_csv
 repo_name: intel/ai-containers
 repo_url: https://github.com/intel/ai-containers
-site_name: Intel® AI Containers
+site_name: AI Containers
 #TODO: Get previous container versions in an easy way
 # https://squidfunk.github.io/mkdocs-material/setup/setting-up-versioning/
 theme:
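
To see the renamed `site_name` take effect, the docs site can be previewed locally; a minimal sketch, assuming the MkDocs dependencies this configuration relies on are already installed:

```bash
# Serve the documentation locally and check the new "AI Containers" site title.
mkdocs serve          # live-reloading preview at http://127.0.0.1:8000
mkdocs build --strict # or a one-off build that fails on warnings
```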

python/README.md (+1 −1)

@@ -15,7 +15,7 @@ The images below include variations for only the core packages in the [Intel® D
 
 ## Build from Source
 
-To build the images from source, clone the [Intel® AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
+To build the images from source, clone the [AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
 
 ```bash
 cd python

pytorch/README.md (+1 −3)

@@ -241,8 +241,6 @@ Additionally, if you have a [DeepSpeed* configuration](https://www.deepspeed.ai/
 
 ---
 
-#### Hugging Face Generative AI Container
-
 The image below is an extension of the IPEX Multi-Node Container designed to run Hugging Face Generative AI scripts. The container has the typical installations needed to run and fine tune PyTorch generative text models from Hugging Face. It can be used to run multinode jobs using the same instructions from the [IPEX Multi-Node container](#setup-and-run-ipex-multi-node-container).
 
 | Tag(s) | Pytorch | IPEX | oneCCL | HF Transformers | Dockerfile |
@@ -324,7 +322,7 @@ The images below additionally include [Jupyter Notebook](https://jupyter.org/) s
 
 ## Build from Source
 
-To build the images from source, clone the [Intel® AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
+To build the images from source, clone the [AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
 
 ```bash
 cd pytorch

pytorch/serving/README.md (+1 −1)

@@ -155,7 +155,7 @@ As demonstrated in the above example, models must be registered before they can
 
 ### KServe
 
-Apply Intel Optimizations to KServe by patching the serving runtimes to use Intel Optimized Serving Containers with `kubectl apply -f patch.yaml`
+Apply Intel Optimizations to KServe by patching the serving runtimes to use Serving Containers with Intel Optimizations via `kubectl apply -f patch.yaml`
 
 > [!NOTE]
 > You can modify this `patch.yaml` file to change the serving runtime pod configuration.
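
For context on the KServe hunk above, the patch is applied against an existing KServe installation; a minimal sketch of applying and verifying it (the verification step assumes a standard KServe setup with cluster-scoped serving runtimes and is not shown in this commit):

```bash
# Apply the patched serving runtimes from the pytorch/serving directory.
kubectl apply -f patch.yaml

# Assumed verification: list the serving runtimes KServe now knows about.
kubectl get clusterservingruntimes
```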

tensorflow/README.md (+1 −1)

@@ -286,7 +286,7 @@ The images below additionally include [Jupyter Notebook](https://jupyter.org/) s
 
 ## Build from Source
 
-To build the images from source, clone the [Intel® AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
+To build the images from source, clone the [AI Containers](https://github.com/intel/ai-containers) repository, follow the main `README.md` file to setup your environment, and run the following command:
 
 ```bash
 cd pytorch

workflows/README.md (+4 −4)

@@ -1,6 +1,6 @@
 # Intel® AI Workflows
 
-Demonstrating showing how the [Intel® AI Containers] can be used for different use cases:
+Demonstrating showing how the [AI Containers] can be used for different use cases:
 
 ## PyTorch Workflows
 
@@ -11,7 +11,7 @@ Demonstrating showing how the [Intel® AI Containers] can be used for different
 
 ## Build from Source
 
-To build the images from source, clone the [Intel® AI Containers] repository, follow the main `README.md` file to setup your environment, and run the following command:
+To build the images from source, clone the [AI Containers] repository, follow the main `README.md` file to setup your environment, and run the following command:
 
 ```bash
 cd workflows/charts/huggingface-llm
@@ -21,7 +21,7 @@ docker compose run huggingface-llm sh -c "python /workspace/scripts/finetune.py
 
 ## License
 
-View the [License](https://github.com/intel/ai-containers/blob/main/LICENSE) for the [Intel® AI Containers].
+View the [License](https://github.com/intel/ai-containers/blob/main/LICENSE) for the [AI Containers].
 
 The images below also contain other software which may be under other licenses (such as Pytorch*, Jupyter*, Bash, etc. from the base).
 
@@ -31,6 +31,6 @@ It is the image user's responsibility to ensure that any use of The images below
 
 <!--Below are links used in these document. They are not rendered: -->
 
-[Intel® AI Containers]: https://github.com/intel/ai-containers
+[AI Containers]: https://github.com/intel/ai-containers
 [Distributed LLM Fine Tuning with Kubernetes]: https://github.com/intel/ai-containers/tree/main/workflows/charts/huggingface-llm
 [TorchServe* with Kubernetes]: https://github.com/intel/ai-containers/tree/main/workflows/charts/torchserve

workflows/charts/huggingface-llm/README.md (+1 −1)

@@ -347,4 +347,4 @@ fine tune the model.
 ```
 
 ----------------------------------------------
-Autogenerated from chart metadata using [helm-docs v1.13.1](https://github.com/norwoodj/helm-docs/releases/v1.13.1)
+Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)
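
Since this README is autogenerated, the footer bump above normally comes from re-running helm-docs against the chart; a hedged sketch, assuming helm-docs v1.14.2 is installed locally:

```bash
# Regenerate the chart README from README.md.gotmpl and Chart.yaml metadata.
cd workflows/charts/huggingface-llm
helm-docs  # rewrites README.md in place using the installed helm-docs version
```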

workflows/charts/torchserve/Chart.yaml (+2 −2)

@@ -13,8 +13,8 @@
 # limitations under the License.
 
 apiVersion: v2
-name: intel-torchserve
-description: Intel TorchServe is a performant, flexible and easy to use tool for serving PyTorch models in production.
+name: torchserve-on-intel
+description: TorchServe on Intel is a performant, flexible and easy to use tool for serving PyTorch models in production.
 
 # A chart can be either an 'application' or a 'library' chart.
 #
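
Because the chart's `name` field changes here, a quick way to confirm the rename from a checkout is to point Helm at the chart directory (the commands below are standard Helm usage against a local path, not part of this commit):

```bash
# Validate the chart and print its metadata after the rename.
helm lint workflows/charts/torchserve
helm show chart workflows/charts/torchserve  # should report name: torchserve-on-intel
```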

workflows/charts/torchserve/README.md (+4 −4)

@@ -1,8 +1,8 @@
-# Intel TorchServe
+# TorchServe with Intel Optimizations
 
-Intel TorchServe is a performant, flexible and easy to use tool for serving PyTorch models in production.
+TorchServe on Intel is a performant, flexible and easy to use tool for serving PyTorch models in production.
 
-For more information about how to use Intel Optimized TorchServe, check out the [container documentation](../../../pytorch/serving/README.md).
+For more information about how to use TorchServe with Intel Optimizations, check out the [container documentation](../../../pytorch/serving/README.md).
 
 ![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.16.0](https://img.shields.io/badge/AppVersion-1.16.0-informational?style=flat-square)
 
@@ -18,7 +18,7 @@ For more information about how to use Intel Optimized TorchServe, check out the
 | deploy.resources.limits | object | `{"cpu":"4000m","memory":"1Gi"}` | Maximum resources per pod |
 | deploy.resources.requests | object | `{"cpu":"1000m","memory":"512Mi"}` | Minimum resources per pod |
 | deploy.storage.nfs | object | `{"enabled":false,"path":"nil","readOnly":true,"server":"nil","subPath":"nil"}` | Network File System (NFS) storage for models |
-| deploy.tokens_disabled | bool | `false` | Set token authentication on or off. Checkout the latest [torchserve docs](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more details. |
+| deploy.tokens_disabled | bool | `true` | Set token authentication on or off. Checkout the latest [torchserve docs](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more details. |
 | fullnameOverride | string | `""` | Full qualified Domain Name |
 | nameOverride | string | `""` | Name of the serving service |
 | pvc.size | string | `"1Gi"` | Size of the storage |
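
The documented default for `deploy.tokens_disabled` flips from `false` to `true` in this hunk, i.e. token authentication is off by default. A hedged example of overriding it at install time (the release name is a placeholder and the chart path is taken from the repository layout, not from this commit):

```bash
# Install the chart with token authentication re-enabled.
helm install torchserve-example workflows/charts/torchserve \
  --set deploy.tokens_disabled=false
```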

workflows/charts/torchserve/README.md.gotmpl (+2 −2)

@@ -1,8 +1,8 @@
-# Intel TorchServe
+# TorchServe with Intel Optimizations
 
 {{ template "chart.description" . }}
 
-For more information about how to use Intel Optimized TorchServe, check out the [container documentation](../../../pytorch/serving/README.md).
+For more information about how to use TorchServe with Intel Optimizations, check out the [container documentation](../../../pytorch/serving/README.md).
 
 {{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
