An intelligent solution for efficient incident resolution using LLM and RAG.
- 📖 Introduction
- 🛠️ Installation
- 🔧 Launch Server
- 🔒 Default Admin Credentials
- ⚙️ Configure LLM
- 📝 User Evaluation
IncidentNavigator is a solution designed to streamline recurring incident resolution. Using a Large Language Model (LLM) and Retrieval-Augmented Generation (RAG), it offers:
- Dynamic interaction for clarifying queries.
- Transparent, fact-based responses.
- Adaptability to structured and unstructured data.
Demo: Watch the application in action on YouTube.
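At a high level, every answer is produced by retrieving the incident reports most relevant to the question and grounding the LLM's response in them. The sketch below is purely illustrative: the function names (`retrieve_similar_incidents`, `call_llm`) are placeholders, not part of the project's codebase, and it only shows the general retrieve-then-generate pattern under assumed interfaces.

```python
# Illustrative retrieve-then-generate (RAG) sketch; the function names and
# interfaces are assumptions, not the actual IncidentNavigator code.
from typing import List

def retrieve_similar_incidents(question: str, top_k: int = 3) -> List[str]:
    # Placeholder: a real implementation would query a vector store
    # (e.g. the Weaviate instance used by the project) for similar reports.
    return ["Example incident report text ..."] * top_k

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call the configured LLM API.
    return "Example answer grounded in the retrieved reports."

def answer(question: str) -> str:
    # Build a prompt that constrains the model to the retrieved context,
    # which is what makes the responses transparent and fact-based.
    context = "\n\n".join(retrieve_similar_incidents(question))
    prompt = (
        "Answer the question using only the incident reports below.\n\n"
        f"Reports:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How was a similar incident resolved previously?"))
```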
Ensure the prerequisites (Git, Docker, and Python) are installed on your system, then clone the repository:
git clone https://github.com/MrSneaker/IncidentNavigatorProject.git
cd IncidentNavigatorProject
This script will handle the creation of the virtual environment, installation of required Python packages, and any other necessary setup steps.
sudo ./app/install/install.sh
Note: Installing the dependencies can take a while, depending on your network speed.
This script uninstalls the application, removing it and its associated files from the system.
sudo ./app/install/uninstall.sh
- Navigate to the installation directory:
  cd app/install
- Docker services:
  Docker lets you build and run services along with their dependencies in containers, so you do not need to install them locally. Because the application needs Weaviate, MongoDB, and Redis, a docker-compose.yml file is provided to make them easier to install and run (see the connectivity check sketch after this list).
  - Start:
    docker compose up -d
  - Stop:
    docker compose down
  Note: If you encounter permission issues with Docker on Linux, try running the commands with sudo or ensure your user is added to the Docker group (sudo usermod -aG docker $USER).
- Set up a Python environment:
  python -m venv .env
  source .env/bin/activate  # Linux/macOS
  pip install -r requirements.txt
- Populate the databases:
  python3 create_dbs.py  # Use [-c | --clear] to reset the databases.
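Before populating the databases, it can help to confirm that the Docker services are actually reachable. The sketch below is a minimal check using only Python's standard library; it assumes the services are exposed on their default ports (Weaviate 8080, MongoDB 27017, Redis 6379), which may differ from the port mappings in the provided docker-compose.yml.

```python
# Minimal connectivity check for the backing services started by Docker.
# The host/port values are assumptions (default ports); adjust them to
# match the provided docker-compose.yml if it maps different ports.
import socket

SERVICES = {
    "Weaviate": ("localhost", 8080),
    "MongoDB": ("localhost", 27017),
    "Redis": ("localhost", 6379),
}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    # Attempt a plain TCP connection; success means the port is open.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "OK" if is_reachable(host, port) else "NOT REACHABLE"
        print(f"{name} ({host}:{port}): {status}")
```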
sudo ./app/install/start.sh
This script will start all components for you. Navigate to http://localhost:5000 to access the homepage.
sudo ./app/install/stop.sh
This script will stop all running components.
- Navigate to the project root directory:
  Ensure you are in the root directory of the project before proceeding with the following steps.
  cd /path/to/IncidentNavigatorProject
- Set up the Python environment:
  Activate the environment created during installation:
  source app/install/.env/bin/activate  # Linux/macOS
- Launch the Flask application:
  python3 app/app.py
  Once the application is running, navigate to http://localhost:5000; you will be directed to the homepage.
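Whether the application was started with the script or launched manually, a quick way to confirm the server is up is to request the homepage. This is a small smoke-test sketch using only the standard library and the default address mentioned above (http://localhost:5000).

```python
# Quick smoke test: check that the IncidentNavigator homepage responds.
# Assumes the default address http://localhost:5000 mentioned above.
from urllib.request import urlopen
from urllib.error import URLError

URL = "http://localhost:5000"

try:
    with urlopen(URL, timeout=5) as response:
        print(f"Server is up at {URL} (HTTP {response.status})")
except URLError as exc:
    print(f"Could not reach {URL}: {exc}")
```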
The application comes with a default admin account for initial use:
- Email: admin@example.com
- Password: admin
Important: It is highly recommended to change the password after the first login to secure the application.
To configure the Large Language Model (LLM), the admin should follow these steps:
- Log in to the application using the admin account.
- Navigate to the Settings page.
- In the Add New LLM Configuration section, provide the following example details:
  - LLM Model: llama-3.3-70b-versatile (example model)
  - LLM Model URI: https://api.groq.com/openai/v1/ (example URI)
  - API Key: Provide your API key
- Save the configuration to enable the new LLM model.
Note: Ensure the API key is kept secure and only shared with authorized personnel.
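To verify a configuration before saving it, the URI and API key can be tested directly, since the example URI above points to an OpenAI-compatible API. The sketch below assumes that format (a /chat/completions route accepting a Bearer token) and reuses the example model name and URI; adapt it if your provider differs.

```python
# Sanity-check an LLM configuration by calling the endpoint directly.
# Assumes an OpenAI-compatible API (as with the example Groq URI above);
# other providers may expect a different request format.
import json
import os
from urllib.request import Request, urlopen

BASE_URI = "https://api.groq.com/openai/v1/"   # example URI from the settings
MODEL = "llama-3.3-70b-versatile"              # example model from the settings
API_KEY = os.environ["LLM_API_KEY"]            # never hard-code the key

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Reply with the single word: pong"}],
}
request = Request(
    BASE_URI.rstrip("/") + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
with urlopen(request, timeout=30) as response:
    body = json.load(response)
    print(body["choices"][0]["message"]["content"])
```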
Participate in our user evaluation by filling out this survey.