A sanctuary against corporate dehumanization.
Phoenix Rising was built to empower individuals in an era where human emotions are often commodified under the guise of "wellness." This project provides a private, secure, and authentic space for emotional reflection, leveraging AI to transform personal experiences into uplifting tokens of light.
Developed during the GitHub Copilot 1-Day Build Challenge, this application showcases how AI can enable meaningful development even under tight constraints.
The term "cutting-edge AI" here refers to the utilization of robust models tailored for specific tasks. Due to budget constraints, Phoenix Rising employs:
- `typeform/distilbert-base-uncased-mnli`: a lightweight, efficient model for sentiment analysis.
- `microsoft/Phi-3-medium-4k-instruct`: a versatile instruction-tuned model for generating chat responses.

These models strike a balance between cost and functionality, keeping the project accessible and functional. Response quality naturally correlates with model quality, however, so users can adapt the application by upgrading models, adjusting the `.env` variables, and ensuring adequate token budgets.
The main interface is designed to provide a serene, intuitive environment where users can:
- Reflect: Choose from a range of emotions to set the tone of your journaling session.
- Express: Share your thoughts and transform them into uplifting "light tokens."
- Explore: Gain insights into your emotional progression through interactive visuals.
Dive deep into your emotional journey with actionable insights and analytics:
- Trends Over Time: View how your emotions evolve through a detailed graph.
- State Distribution: Explore the balance between emotional states using a clean pie chart.
- Personalized Tokens: Revisit the light tokens generated during your reflections.
Each element of Phoenix Rising is tailored to support your emotional growth while creating an aesthetically pleasing experience. These screenshots highlight the application's ability to combine cutting-edge AI with thoughtful design for meaningful interactions.
This project was developed for the GitHub Copilot 1-Day Build Challenge, which tasked participants with creating a functional, innovative solution in just 24 hours using AI-powered tools.
To ensure full transparency and accountability, the development process for Phoenix Rising has been documented in detail. This log is available in the root folder as a PDF file: boilerplate_prompt_history_transparency.pdf. The file captures the interplay between the developer and the AI tools used throughout the project, highlighting key prompts, outputs, and decisions.
The PDF follows a clear and structured format to document the iterative development process:
- **Model**: The AI model employed for the interaction (e.g., `Claude Sonnet 3.5`, `GitHub Copilot`).
- **Prompt Number**: A sequential numbering of the prompts.
- **Prompt**: The exact input provided by the developer to the AI model.
- **Separator**: A divider to distinguish input from output for clarity.
- **Response**: The raw response generated by the AI model.
This structure allows anyone reviewing the project to trace how specific features and functionalities were shaped by AI, as well as how developer expertise guided and refined the outcomes.
This transparency effort ensures:
- Accountability: Clear differentiation between AI-generated outputs and human contributions.
- Integrity: Full visibility into how AI tools were used during the initial boilerplate setup of the GitHub Copilot 1-Day Build Challenge.
- Learning Opportunity: A resource for understanding how AI can accelerate development in constrained timeframes.
Only the prompts from the initial boilerplate phase were recorded. After more than 12 hours of coding I was simply too tired, and recording the ENTIRE process in such a structured way would have taken too long.
- Local Storage: All data is persisted in a local SQLite database.
- AI-Powered Sentiment Analysis: Uses Hugging Face models for nuanced emotional insights.
- Token Generation: Produces uplifting messages tailored to user experiences and emotions.
- Customizable Emotional Tracking: Tracks progress and visualizes emotional growth.
The system is designed for simplicity, privacy, and scalability:
```
phoenix_rising/
├── src/
│   ├── app.py                 # Main entry point for the Streamlit app
│   ├── llm_service.py         # Core logic for LLM interaction
│   ├── database.py            # SQLite ORM layer for local data storage
│   ├── schemas.py             # Pydantic models for validation
│   ├── sentiment_service.py   # Custom sentiment analysis logic
│   └── utils.py               # Utility functions
├── assets/
│   └── prompts/
│       └── light_seeds.json   # AI prompt configurations
├── tests/
│   ├── test_llm_service.py    # Unit tests for LLM interactions
│   ├── test_database.py       # Unit tests for database functions
│   └── test_utils.py          # Unit tests for helper functions
├── boilerplate_prompt_history.txt  # Initial development logs for transparency
├── poetry.lock
├── pyproject.toml
├── LICENSE
└── README.md
```
- Python 3.10: Backend logic and services.
- Streamlit: User interface for journaling and visualizations.
- Hugging Face Hub: Sentiment analysis and therapeutic text generation.
- SQLite: Lightweight, secure data storage.
- Poetry: Dependency management.
- Tenacity: Retry mechanisms for API resilience.
To run Phoenix Rising, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/ericsonwillians/phoenix-rising.git
   cd phoenix-rising
   ```

2. Install dependencies:

   ```bash
   poetry install
   ```

3. Configure the `.env` file. Create a `.env` file in the project root and configure it with the following variables:

   ```env
   HUGGINGFACE_API_TOKEN=<your-api-token>
   CHAT_MODEL_ENDPOINT=<chat-endpoint-url>
   CHAT_MODEL_PIPELINE=text-generation
   SENTIMENT_MODEL_ENDPOINT=<sentiment-endpoint-url>
   SENTIMENT_MODEL_PIPELINE=zero-shot-classification
   ```

   Suggested models:

   - Sentiment Analysis: `typeform/distilbert-base-uncased-mnli`
   - Chat Generation: `microsoft/Phi-3-medium-4k-instruct`

4. Run the application:

   ```bash
   poetry run python src/app.py
   ```
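At startup, the application needs the `.env` variables above. As a rough illustration of how they might be consumed (the `Settings` class and `load_settings` helper here are hypothetical; the real code may rely on python-dotenv or Pydantic settings instead), a minimal stdlib sketch:

```python
import os
from dataclasses import dataclass


@dataclass
class Settings:
    """Endpoint configuration read from the environment (illustrative)."""
    api_token: str
    chat_endpoint: str
    sentiment_endpoint: str
    chat_pipeline: str = "text-generation"
    sentiment_pipeline: str = "zero-shot-classification"


def load_settings() -> Settings:
    """Read the required variables, failing fast if any are missing."""
    required = ("HUGGINGFACE_API_TOKEN", "CHAT_MODEL_ENDPOINT",
                "SENTIMENT_MODEL_ENDPOINT")
    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return Settings(
        api_token=os.environ["HUGGINGFACE_API_TOKEN"],
        chat_endpoint=os.environ["CHAT_MODEL_ENDPOINT"],
        sentiment_endpoint=os.environ["SENTIMENT_MODEL_ENDPOINT"],
        chat_pipeline=os.getenv("CHAT_MODEL_PIPELINE", "text-generation"),
        sentiment_pipeline=os.getenv("SENTIMENT_MODEL_PIPELINE",
                                     "zero-shot-classification"),
    )
```

Failing fast on missing variables surfaces configuration mistakes immediately instead of mid-session.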
The application uses two pre-trained models deployed via Hugging Face Inference Endpoints:

1. Sentiment Analysis:
   - Model: `typeform/distilbert-base-uncased-mnli`
   - Pipeline: `zero-shot-classification`
   - Purpose: Assigns emotional labels to text inputs based on natural language inference (NLI).

2. Chat Generation:
   - Model: `microsoft/Phi-3-medium-4k-instruct`
   - Pipeline: `text-generation`
   - Purpose: Generates reflective and uplifting responses tailored to emotional states.
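The zero-shot-classification pipeline expects the input text plus a `candidate_labels` parameter in the request body. A hedged sketch of how a request might be assembled and sent (the function names are illustrative, not the project's actual API):

```python
import json
import urllib.request


def build_zero_shot_payload(text: str, labels: list[str]) -> dict:
    """Payload shape for a zero-shot-classification inference endpoint."""
    return {"inputs": text, "parameters": {"candidate_labels": labels}}


def classify(endpoint: str, token: str, text: str, labels: list[str]) -> dict:
    """POST the payload to the endpoint and return the parsed JSON response
    (ranked labels with scores)."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_zero_shot_payload(text, labels)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In practice the candidate labels would be the app's emotional states, and the top-scoring label drives the tone of the generated response.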
Hugging Face Inference Endpoints rely on robust NVIDIA hardware, typically provisioned via AWS or GCP. To run this application, you must set up these endpoints remotely on Hugging Face. Here's how:

1. Set Up a Hugging Face Account:
   - Sign up at Hugging Face.
   - Navigate to Settings > Access Tokens and create an API token.

2. Create Inference Endpoints:
   - Deploy the models via Hugging Face's Inference API:
     - Choose `typeform/distilbert-base-uncased-mnli` for sentiment analysis.
     - Choose `microsoft/Phi-3-medium-4k-instruct` for text generation.
   - The deployment process requires selecting hardware with NVIDIA GPUs (e.g., A100 or T4) through Hugging Face, backed by AWS or GCP.

3. Allocate Budget:
   - Running these endpoints incurs costs based on compute resources and model usage. Ensure sufficient credits in your Hugging Face account.

4. Endpoint URLs:
   - Once deployed, note down the API URLs for the endpoints and set them in the `.env` file under `CHAT_MODEL_ENDPOINT` and `SENTIMENT_MODEL_ENDPOINT`.

A few additional tips:

- Budget Awareness: Costs for GPU-backed inference endpoints can add up quickly. Monitor your usage and configure smaller models if needed.
- Setting Up: Use the Hugging Face dashboard to manage endpoints and monitor logs, and test endpoints with small inputs before integrating them into the app.
- Token Management: Use the API token generated in your Hugging Face profile for authentication, and ensure it has sufficient permissions for accessing inference endpoints.

By following these steps, you can successfully run Phoenix Rising using Hugging Face's remote inference service. For more details, consult the Hugging Face Inference Endpoints Documentation.
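Endpoints that scale to zero typically answer 503 while cold-starting. The project lists Tenacity for exactly this; as a library-free illustration of the same idea, an exponential-backoff retry might look like the sketch below (the helper name is illustrative):

```python
import time


def call_with_retry(fn, attempts: int = 4, base_delay: float = 1.0):
    """Call fn(), retrying with exponential backoff on any exception.

    Roughly what tenacity's @retry(wait=wait_exponential(), ...) decorator
    provides; the last failure is re-raised once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
```

Wrapping each endpoint call this way lets the app ride out a cold start instead of failing on the first 503.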
- Local-Only Data: All data is stored locally on your machine.
- No External Tracking: No analytics or third-party monitoring.
- Encryption: End-to-end encryption safeguards all user inputs and outputs.
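Because entries never leave the machine, persistence can be as simple as a local SQLite file. A minimal sketch of what the journal table might look like (the schema and function names here are illustrative; the real `database.py` uses an ORM layer):

```python
import sqlite3


def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the journal table if it does not exist and return the connection."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS light_tokens (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               emotion TEXT NOT NULL,
               entry TEXT NOT NULL,
               token TEXT NOT NULL,
               created_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn


def save_token(conn: sqlite3.Connection, emotion: str, entry: str,
               token: str) -> None:
    """Persist one reflection and its generated light token locally."""
    conn.execute(
        "INSERT INTO light_tokens (emotion, entry, token) VALUES (?, ?, ?)",
        (emotion, entry, token),
    )
    conn.commit()
```

A single-file database keeps the privacy promise trivially auditable: the data lives wherever the file lives.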
Due to the time constraints of the challenge, the following improvements are planned:
-
Code Refactoring:
- Move classes from
llm_service.py
into separate files to improve maintainability. - Simplify complex methods and modularize functionality further.
- Move classes from
-
Unit Testing:
- Enhance test coverage, especially for edge cases and error handling.
- Refactor existing tests to align with potential architectural changes.
-
Performance Optimization:
- Optimize API calls to Hugging Face for reduced latency.
- Add caching mechanisms for repeated sentiment analyses.
-
Enhanced UI/UX:
- Introduce more intuitive visualizations for emotional tracking.
- Provide detailed feedback for each generated token of light.
-
Documentation:
- Extend inline comments and docstrings for improved developer onboarding.
- Create API documentation for future integration capabilities.
If you encounter issues while running Phoenix Rising, this section provides solutions for common problems related to configuration, models, and endpoint connectivity.
**App Stuck in "Sanctuary Renewal"**

Symptoms:
- The app displays a message like: "Sanctuary Renewal in Progress: Our sacred space is currently in a brief period of restoration..."
- The application remains in this "restoration" state longer than expected.

Possible Causes:
- The Hugging Face inference endpoint is scaled to zero and is cold-starting.
- Network latency or temporary API unavailability.

Solutions:
- Check the endpoint status in your Hugging Face Dashboard.
- Configure the endpoints to maintain at least one active replica to reduce cold-start delays.
- Wait 2-3 minutes for the service to restore, then retry.
**Authentication Errors**

Symptoms:
- Errors indicating "401 Unauthorized" or "Token Invalid."
- Models fail to respond, and the app remains non-functional.

Possible Causes:
- The Hugging Face API token is missing, expired, or lacks the necessary permissions.

Solutions:
- Generate a valid API token from Hugging Face Settings > Access Tokens.
- Ensure the token has read and write permissions for inference endpoints.
- Add the token to your `.env` file:

  ```env
  HUGGINGFACE_API_TOKEN=<your-valid-api-token>
  ```
**Endpoint Configuration Errors**

Symptoms:
- Models fail to respond, or the app crashes when trying to analyze input or generate responses.

Possible Causes:
- The `CHAT_MODEL_ENDPOINT` or `SENTIMENT_MODEL_ENDPOINT` in your `.env` file is incorrect or points to an inactive model.

Solutions:
- Verify that the URLs in the `.env` file match the endpoint URLs in your Hugging Face dashboard.
- Ensure the endpoints are deployed and active.
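A quick way to catch a malformed endpoint URL before it surfaces as a confusing crash is a small sanity check at startup. This helper is a hypothetical illustration, not part of the codebase:

```python
from urllib.parse import urlparse


def is_valid_endpoint(url: str) -> bool:
    """True if the URL is an absolute http(s) address with a host part."""
    parsed = urlparse(url or "")
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

Running each configured endpoint through a check like this turns a vague "model failed to respond" into an immediate, specific configuration error.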
**Cold-Start Delays**

Symptoms:
- Long delays when interacting with the app.
- Errors related to endpoint unavailability.

Possible Causes:
- The endpoint is scaled to zero replicas to save costs.

Solutions:
- Increase the minimum replica count for the endpoint in the Hugging Face dashboard to avoid cold starts.
- Monitor endpoint usage to balance performance against cost.
**Rate Limits**

Symptoms:
- Errors indicating "429 Too Many Requests."
- Delayed or missing responses from the models.

Possible Causes:
- Your Hugging Face account's usage limits have been exceeded.

Solutions:
- Upgrade your Hugging Face plan or add credits to your account.
- Use smaller input sizes to reduce token usage.
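One cheap way to keep input sizes down is to cap each journal entry before it reaches the endpoint. An illustrative helper (the character limit is arbitrary, and this function is a sketch rather than part of the codebase):

```python
def truncate_entry(text: str, max_chars: int = 1000) -> str:
    """Trim the journal entry, cutting at a word boundary where possible."""
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars]
    # Prefer ending on a whole word rather than mid-word.
    if " " in cut:
        cut = cut.rsplit(" ", 1)[0]
    return cut
```

Shorter inputs mean fewer tokens billed per request and a lower chance of tripping the 429 limit during heavy journaling sessions.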
**Permission Errors**

Symptoms:
- Errors such as "403 Forbidden" or "Access Denied."

Possible Causes:
- The API token does not have sufficient permissions.

Solutions:
- Ensure the token is created with the proper permissions:
  - read: to fetch models and endpoints.
  - write: to manage endpoint interactions.
- Update the `.env` file with the correct token.
If none of the above solutions resolve the issue:

1. Check Application Logs:
   - Logs are stored in `logs/phoenix.log`. Look for specific errors or stack traces.

2. Verify Environment Variables:
   - Run `cat .env` and ensure all variables are set correctly.

3. Restart the App:
   - Stop and restart the app to refresh all configurations:

     ```bash
     poetry run python src/app.py
     ```

4. Contact Support:
   - If issues persist, refer to the Hugging Face Support Documentation or raise an issue in the project repository.
- Always keep a backup of your `.env` file and ensure it contains valid tokens and endpoints.
- Regularly monitor the status of your Hugging Face endpoints.
- Configure models with appropriate replica counts to avoid cold starts.
Contributions to enhance functionality and maintainability are welcome:

1. Fork the repository.
2. Create a feature branch: `git checkout -b feature-name`.
3. Commit your changes: `git commit -m "Add feature description"`.
4. Push to your branch: `git push origin feature-name`.
5. Submit a pull request for review.
Ericson Willians
GitHub | LinkedIn
This project is licensed under the MIT License.