This project translates a natural-language question into a database query and executes it against your database.
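The overall flow can be sketched as follows. This is a minimal illustration, not the app's actual code: the translate_to_sql stub stands in for the Llama 3 call, and the in-memory SQLite table stands in for your real database.

```python
import sqlite3

# Hypothetical stand-in for the LLM call: in the real app, Llama 3 (via
# Ollama) generates the SQL from the user's question.
def translate_to_sql(question: str) -> str:
    # Toy lookup for illustration only; the model produces this text instead.
    templates = {
        "how many users are there?": "SELECT COUNT(*) FROM users",
    }
    return templates[question.lower()]

def ask(question: str, conn: sqlite3.Connection):
    sql = translate_to_sql(question)     # natural language -> SQL
    return conn.execute(sql).fetchall()  # execute against the database

# Demo database with three rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,), (3,)])
print(ask("How many users are there?", conn))  # [(3,)]
```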
Download Ollama from the official website.
In your terminal, download Llama 3 via ollama pull llama3
(the download takes roughly 30 minutes, depending on your connection).
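Once the pull completes, Ollama serves the model over a local HTTP API on port 11434. As a sketch, this is roughly the request the app would send to the /api/generate endpoint (the prompt here is made up for illustration):

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    # Ollama's local generate endpoint; stream=False returns one JSON response.
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Write a SQL query that counts the rows in users.")
print(json.loads(req.data)["model"])  # llama3
```

Sending the request with urllib.request.urlopen(req) only works while the Ollama server is running.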
Create a service account key in GCP for the project you want to access. Then store it as profiles/gcp_service_account_key.json.
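A quick way to sanity-check the key file before running the app is to confirm it has the standard GCP service-account fields. This helper is a sketch, not part of the project; the field names follow the usual service-account JSON format:

```python
import json
import tempfile
from pathlib import Path

# Standard fields present in a GCP service-account key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_service_account_key(path: str) -> str:
    key = json.loads(Path(path).read_text())
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        raise ValueError(f"key file is missing fields: {sorted(missing)}")
    if key["type"] != "service_account":
        raise ValueError("not a service-account key")
    return key["project_id"]

# Demo with a fake key written to a temp file (real keys hold actual credentials).
demo = {
    "type": "service_account",
    "project_id": "my-project",
    "private_key": "-----BEGIN PRIVATE KEY-----...",
    "client_email": "sa@my-project.iam.gserviceaccount.com",
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(demo, f)
print(check_service_account_key(f.name))  # my-project
```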
Now set up the project by running the following:
make setup-local
To ask Llama 3 a question about your data, run the following in your terminal:
poetry run streamlit run app.py
To build the Docker image and start the container, run:
docker-compose up
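For orientation, a compose file for a Streamlit app like this one typically looks along these lines. The service name, port, and volume mount below are assumptions; check the repository's own docker-compose.yml:

```yaml
services:
  app:
    build: .
    ports:
      - "8501:8501"                  # Streamlit's default port
    volumes:
      - ./profiles:/app/profiles     # mount the GCP key into the container
```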
When the app is not in use, stop the container with:
docker-compose down
You can inspect the running container via:
docker container list
# find the ID of your docker container
docker exec -it <container_id> bash