Text-2-Video Generation: Slides Here

This repo demonstrates how to build a text-to-video application using BentoML, powered by XTTS, vLLM, and SDXL Turbo.

Slideshow Picture

It builds on existing BentoML example projects, including BentoVLLM, BentoXTTS, and BentoSDXLTurbo, and uses MoviePy to stitch their outputs into a video.
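The video-assembly step itself isn't shown in this README; below is a minimal sketch, using MoviePy 1.x, of how each generated slide image could be paired with its narration audio and concatenated into one video. The `make_slideshow` helper and the file names are hypothetical, not part of this repo.

```python
# Hypothetical sketch: combine generated slide images and narration audio
# into a single video with MoviePy 1.x. File names are placeholders.
from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips


def make_slideshow(slides, output_path="output.mp4"):
    """slides is a list of (image_path, audio_path) pairs."""
    clips = []
    for image_path, audio_path in slides:
        audio = AudioFileClip(audio_path)
        # Show each slide image for the length of its narration.
        clip = ImageClip(image_path).set_duration(audio.duration).set_audio(audio)
        clips.append(clip)
    video = concatenate_videoclips(clips, method="compose")
    video.write_videofile(output_path, fps=24)


make_slideshow([("slide_1.png", "slide_1.wav"), ("slide_2.png", "slide_2.wav")])
```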

Prerequisites

Install dependencies

git clone https://github.com/CharlesCreativeContent/BentoText2Video.git
cd BentoText2Video
pip install -r requirements.txt

Run the BentoML Service

The BentoML Service is defined in service.py.
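This README doesn't reproduce service.py; the snippet below is a rough sketch of what a BentoML 1.2-style Service with a `synthesize` endpoint could look like, matching the curl call further down and the `service:XTTS` entry point in the startup log. The resource settings and model-loading details are assumptions.

```python
# Rough sketch of a BentoML 1.2-style Service definition; not the repo's actual code.
from pathlib import Path

import bentoml


@bentoml.service(
    resources={"gpu": 1},      # assumed resource requirement
    traffic={"timeout": 300},  # assumed request timeout
)
class XTTS:
    def __init__(self) -> None:
        # Load the XTTS, vLLM, and SDXL Turbo pipelines here (omitted in this sketch).
        ...

    @bentoml.api
    def synthesize(self, text: str, lang: str = "en") -> Path:
        # Generate narration audio and slide images, assemble them with MoviePy,
        # and return the resulting video file (details omitted).
        output_path = Path("output.mp4")
        return output_path
```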

Run bentoml serve in your project directory to start the Service.

You may also set the environment variable COQUI_TOS_AGREED=1 to agree to the Coqui TTS terms of service.

lock_packages is currently set to false in bentofile.yaml, which skips locking dependency versions when the Bento is built.
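For context, the relevant part of a bentofile.yaml typically looks like the excerpt below; the entry point and include pattern here are assumptions based on the startup log, not a verbatim copy of this repo's file.

```yaml
# Hypothetical excerpt of bentofile.yaml; values are assumptions.
service: "service:XTTS"            # module:class entry point
include:
  - "*.py"
python:
  requirements_txt: "./requirements.txt"
  lock_packages: false             # skip locking dependency versions at build time
```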

$ COQUI_TOS_AGREED=1 bentoml serve .

2024-01-18T11:13:54+0800 [INFO] [cli] Starting production HTTP BentoServer from "service:XTTS" listening on http://localhost:3000 (Press CTRL+C to quit)
/workspace/codes/examples/xtts/venv/lib/python3.10/site-packages/TTS/api.py:70: UserWarning: `gpu` will be deprecated. Please use `tts.to(device)` instead.
  warnings.warn("`gpu` will be deprecated. Please use `tts.to(device)` instead.")
 > tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
 > Using model: xtts

The server is now active at http://localhost:3000.

You can interact with it using the Swagger UI, curl, or a Python client (examples below).

cURL

curl -X 'POST' \
  'http://localhost:3000/synthesize' \
  -H 'accept: */*' \
  -H 'Content-Type: application/json' \
  -d '{
  "text": "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
  "lang": "en"
}' -o output.mp4
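You can also call the Service from Python with BentoML's HTTP client; this short example assumes the same `synthesize` endpoint and parameters as the curl call above.

```python
# Call the running Service from Python; the endpoint name and parameters
# mirror the curl example above.
import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    result = client.synthesize(
        text=(
            "It took me quite a long time to develop a voice and now "
            "that I have it I am not going to be silent."
        ),
        lang="en",
    )
    print(result)  # path to the generated video file
```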

Deploy to production

After the Service is ready, you can deploy the application to BentoCloud for better management and scalability.

A YAML configuration file (bentofile.yaml) is used to define the build options and package your application into a Bento. See Bento build options to learn more.

Make sure you have logged in to BentoCloud, then run the following command in your project directory to deploy the application to BentoCloud.

bentoml deploy .

Once the application is running on BentoCloud, you can access it with the exposed URL.

Note: You can also use BentoML to generate a Docker image for custom deployments.
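A minimal sketch of that workflow (the Bento tag is a placeholder; use the tag printed by bentoml build):

```bash
# Build the Bento from bentofile.yaml, then build a container image from it.
bentoml build
bentoml containerize <bento_name>:<tag>
```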