
πŸ‘¨πŸ»β€πŸ’» Meet Lumina – my personal chatbot assistant designed to answer any questions. Powered by Optuna, RAG, LangChain, Llama3, LoRA optimization, and Pinecone, Lumina offers friendly support and smart solutions tailored for all conversations. Created as part of the mid-term project for COMP-488 at UNC.


My Personal AI Assistant Project - Lumina πŸš—

David Nguyen's Personal AI Assistant - Lumina is a full-stack web application that lets users ask questions about David Nguyen or any other topic and receive instant, personalized responses powered by state‑of‑the‑art AI. Users can log in to save their conversation history or continue as guests. The app is built with modern technologies and provides a sleek, responsive user interface with animations.

Lumina Logo


Live App

Currently, the app is deployed live on Vercel at: https://lumina-david.vercel.app. Feel free to check it out!

The backend is also deployed on Vercel at: https://ai-assistant-chatbot-server.vercel.app/.

Alternatively, a backup of the app is deployed on Netlify at: https://lumina-ai-chatbot.netlify.app/.

Key Technologies

Node.js, Express, OpenAI, TypeScript, React, MongoDB, Pinecone, Material UI, JWT, Vercel, Netlify, Swagger, LangChain, Python, Docker, Jupyter Notebook, Git

Features

  • AI Chatbot: Ask questions about David Nguyen and general topics; receive responses from an AI.
  • User Authentication: Sign up, log in, and log out using JWT authentication.
  • Conversation History: Save, retrieve, rename, and search past conversations (only for authenticated users).
  • Reset Password: Verify email and reset a user’s password.
  • Responsive UI: Built with React and Material‑UI (MUI) with a fully responsive, modern, and animated interface.
  • Landing Page: A dynamic landing page with animations, feature cards, and call-to-action buttons.
  • Guest Mode: Users may interact with the AI assistant as a guest, though conversations will not be saved.
  • Dark/Light Mode: Users can toggle between dark and light themes, with the preference stored in local storage.

Architecture

The project is divided into two main parts:

  • Backend:
    An Express server written in TypeScript. It provides endpoints for:

    • User authentication (signup, login).
    • Conversation management (create, load, update, and search conversations).
    • AI chat integration (simulated calls to external generative AI APIs).
    • Additional endpoints for email verification and password reset.
    • MongoDB is used for data storage, with Mongoose for object modeling.
  • Frontend:
    A React application built with TypeScript and Material‑UI (MUI). It includes:

    • A modern, animated user interface for chatting with the AI.
    • A landing page showcasing the app’s features.
    • Pages for login, signup, and password reset.
    • A collapsible sidebar for conversation history.
    • Theme toggling (dark/light mode) and responsive design.
  • AI/ML: Use RAG (Retrieval-Augmented Generation) & LangChain to enhance the AI's responses by retrieving relevant information from a knowledge base or external sources. This involves:

    • Retrieval: Implement a retrieval mechanism to fetch relevant documents or data from a knowledge base or external sources.
    • Augmentation: Combine the retrieved information with the user's query to provide a more informed response.
    • Generation: Use a generative model to create a response based on the augmented input.
    • Feedback Loop: Implement a feedback loop to continuously improve the system based on user interactions and feedback.
    • LangChain: Use LangChain to manage the entire process, from retrieval to generation, ensuring a seamless integration of RAG into the chatbot's workflow.
    • Pinecone: Use Pinecone for vector similarity search to efficiently retrieve relevant documents or data for the RAG model.

Technologies Used

  • Backend:

    • Node.js & Express
    • TypeScript
    • MongoDB (with Mongoose)
    • JWT for authentication
    • Additional packages: bcrypt, cors, dotenv, multer, nodemailer, openai, uuid, etc.
    • Development tools: nodemon, ts-node
  • Frontend:

    • React with TypeScript
    • Material‑UI (MUI) for UI components
    • React Router for navigation
    • Axios for API requests
    • React Markdown for rendering AI-generated markdown text

User Interface

Screenshots of the following pages are included: Landing Page, Homepage (light and dark mode), Login Page, Signup Page, Reset Password Page, Homepage for unauthenticated users, and the 404 Page.

Setup & Installation

Backend Setup

  1. Clone the repository:

    git clone https://github.com/hoangsonww/AI-Assistant-Chatbot.git
    cd AI-Assistant-Chatbot/server
  2. Install dependencies:

    npm install
  3. Environment Variables:
    Create a .env file in the server folder with the following (adjust values as needed):

    PORT=5000
    MONGODB_URI=mongodb://localhost:27017/ai-assistant
    JWT_SECRET=your_jwt_secret_here
    GOOGLE_AI_API_KEY=your_google_ai_api_key_here
    AI_INSTRUCTIONS=Your system instructions for the AI assistant
    PINECONE_API_KEY=your_pinecone_api_key_here
    PINECONE_INDEX_NAME=your_pinecone_index_name_here
  4. Run the server in development mode:

    npm run dev

    This uses nodemon with ts-node to watch for file changes.

Frontend Setup

  1. Navigate to the client folder:

    cd ../client
  2. Install dependencies:

    npm install
  3. Run the frontend development server:

    npm start

    The app will run on http://localhost:3000.

AI/ML Setup

  1. Install necessary Node.js packages:

    npm install
  2. Store knowledge data in Pinecone vector database:

    npm run store

    Or

    ts-node server/src/scripts/storeKnowledge.ts
  3. Run this command before starting the backend server so that the knowledge data is already stored in the Pinecone vector database.
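
For reference, the store step amounts to embedding each knowledge snippet and upserting the resulting vectors into the Pinecone index. The sketch below is a hedged approximation of what such a script does; the snippet contents and record IDs are placeholders, not the actual contents of storeKnowledge.ts.

    import { Pinecone } from "@pinecone-database/pinecone";
    import { OpenAIEmbeddings } from "@langchain/openai";

    // Hypothetical sketch: embed knowledge snippets and upsert them into Pinecone.
    async function storeKnowledge(): Promise<void> {
      const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
      const index = pinecone.index(process.env.PINECONE_INDEX_NAME!);

      const snippets = [
        "David Nguyen is a software developer ...", // placeholder entries
        "Lumina is his personal AI assistant ...",
      ];

      // Embed each snippet, then upsert id/vector/metadata records.
      const embeddings = new OpenAIEmbeddings();
      const vectors = await embeddings.embedDocuments(snippets);
      await index.upsert(
        vectors.map((values, i) => ({
          id: `doc-${i}`,
          values,
          metadata: { text: snippets[i] },
        }))
      );
    }

    storeKnowledge().catch(console.error);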

Deployment

  • Backend:
    Deploy the backend to your preferred Node.js hosting service (Heroku, AWS, etc.). Make sure to set your environment variables on the hosting platform.

  • Frontend:
    Deploy the React frontend using services like Vercel, Netlify, or GitHub Pages. Update API endpoint URLs in the frontend accordingly.

Usage

  • Landing Page:
    The landing page provides an overview of the app’s features and two main actions: Create Account (for new users) and Continue as Guest.

  • Authentication:
    Users can sign up, log in, and reset their password. Authenticated users can save and manage their conversation history.

  • Chatting:
    The main chat area allows users to interact with the AI assistant. The sidebar displays saved conversations (for logged-in users) and allows renaming and searching.

  • Theme:
    Toggle between dark and light mode via the navbar. The chosen theme is saved in local storage and persists across sessions.

API Endpoints

Authentication

  • POST /api/auth/signup: Create a new user.
  • POST /api/auth/login: Authenticate a user and return a JWT.
  • GET /api/auth/verify-email?email=example@example.com: Check if an email exists.
  • POST /api/auth/reset-password: Reset a user's password.

Conversations

  • POST /api/conversations: Create a new conversation.
  • GET /api/conversations: Get all conversations for a user.
  • GET /api/conversations/:id: Retrieve a conversation by ID.
  • PUT /api/conversations/:id: Rename a conversation.
  • GET /api/conversations/search/:query: Search for conversations by title or message content.
  • DELETE /api/conversations/:id: Delete a conversation.

Chat

  • POST /api/chat: Process a chat query and return an AI-generated response.

Swagger API Documentation


Project Structure

AI-Assistant-Chatbot/
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ package.json
β”œβ”€β”€ tsconfig.json
β”œβ”€β”€ client/                         # Frontend React application
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ tsconfig.json
β”‚   β”œβ”€β”€ docker-compose.yml
β”‚   β”œβ”€β”€ Dockerfile
β”‚   └── src/
β”‚       β”œβ”€β”€ App.tsx
β”‚       β”œβ”€β”€ index.tsx
β”‚       β”œβ”€β”€ theme.ts
β”‚       β”œβ”€β”€ dev/
β”‚       β”‚   β”œβ”€β”€ palette.tsx
β”‚       β”‚   β”œβ”€β”€ previews.tsx
β”‚       β”‚   β”œβ”€β”€ index.ts
β”‚       β”‚   └── useInitial.ts
β”‚       β”œβ”€β”€ services/
β”‚       β”‚   └── api.ts
β”‚       β”œβ”€β”€ types/
β”‚       β”‚   β”œβ”€β”€ conversation.d.ts
β”‚       β”‚   └── user.d.ts
β”‚       β”œβ”€β”€ components/
β”‚       β”‚   β”œβ”€β”€ Navbar.tsx
β”‚       β”‚   β”œβ”€β”€ Sidebar.tsx
β”‚       β”‚   └── ChatArea.tsx
β”‚       └── pages/
β”‚           β”œβ”€β”€ LandingPage.tsx
β”‚           β”œβ”€β”€ Home.tsx
β”‚           β”œβ”€β”€ Login.tsx
β”‚           β”œβ”€β”€ Signup.tsx
β”‚           β”œβ”€β”€ NotFoundPage.tsx
β”‚           └── ForgotPassword.tsx
└── server/                         # Backend Express application
    β”œβ”€β”€ package.json
    β”œβ”€β”€ tsconfig.json
    β”œβ”€β”€ Dockerfile
    β”œβ”€β”€ docker-compose.yml
    └── src/
        β”œβ”€β”€ server.ts
        β”œβ”€β”€ models/
        β”‚   β”œβ”€β”€ Conversation.ts
        β”‚   └── User.ts
        β”œβ”€β”€ routes/
        β”‚   β”œβ”€β”€ auth.ts
        β”‚   β”œβ”€β”€ conversations.ts
        β”‚   └── chat.ts
        β”œβ”€β”€ services/
        β”‚   └── authService.ts
        β”œβ”€β”€ utils/
        β”‚   └── ephemeralConversations.ts
        └── middleware/
            └── auth.ts

Dockerization

To run the application using Docker, simply run docker-compose up in the root directory of the project. This will start both the backend and frontend services as defined in the docker-compose.yml file.

Why Dockerize?

  • Consistency: Ensures the application runs the same way in different environments.
  • Isolation: Keeps dependencies and configurations contained.
  • Scalability: Makes it easier to scale services independently.
  • Simplified Deployment: Streamlines the deployment process.
  • Easier Collaboration: Provides a consistent environment for all developers.

OpenAPI Specification

There is an OpenAPI specification file (openapi.yaml) in the root directory that describes the API endpoints, request/response formats, and authentication methods. This can be used to generate client SDKs or documentation.

To view the API documentation, import the openapi.yaml file into a tool like Swagger UI or Postman, or simply visit the /docs endpoint of the deployed backend.

Contributing

  1. Fork the repository.
  2. Create your feature branch: git checkout -b feature/your-feature-name
  3. Commit your changes: git commit -m 'Add some feature'
  4. Push to the branch: git push origin feature/your-feature-name
  5. Open a Pull Request.

License

This project is licensed under the MIT License.


Thank you for checking out the AI Assistant Project! If you have any questions or feedback, feel free to reach out. Happy coding! πŸš—

