David Nguyen's Personal AI Assistant - Lumina is a full-stack web application that allows users to ask questions about David Nguyen, as well as any other topics, and receive instant, personalized responses powered by state-of-the-art AI. Users can log in to save their conversation history or continue as guests. The app uses modern technologies and provides a sleek, responsive user interface with animations.
- Live App
- Features
- Architecture
- Technologies Used
- User Interface
- Setup & Installation
- Deployment
- Usage
- API Endpoints
- Project Structure
- Dockerization
- OpenAPI Specification
- Contributing
- License
Currently, the app is deployed live on Vercel at: https://lumina-david.vercel.app. Feel free to check it out!
The backend is also deployed on Vercel at: https://ai-assistant-chatbot-server.vercel.app/.
Alternatively, the backup app is deployed live on Netlify at: https://lumina-ai-chatbot.netlify.app/.
- AI Chatbot: Ask questions about David Nguyen and general topics; receive responses from an AI.
- User Authentication: Sign up, log in, and log out using JWT authentication.
- Conversation History: Save, retrieve, rename, and search past conversations (only for authenticated users).
- Reset Password: Verify email and reset a user's password.
- Responsive UI: Built with React and Material-UI (MUI) with a fully responsive, modern, and animated interface.
- Landing Page: A dynamic landing page with animations, feature cards, and call-to-action buttons.
- Guest Mode: Users may interact with the AI assistant as a guest, though conversations will not be saved.
- Dark/Light Mode: Users can toggle between dark and light themes, with the preference stored in local storage.
The project is divided into two main parts:
- Backend:
  An Express server written in TypeScript. It provides endpoints for:
  - User authentication (signup, login).
  - Conversation management (create, load, update, and search conversations).
  - AI chat integration (simulated calls to external generative AI APIs).
  - Email verification and password reset.

  MongoDB is used for data storage, with Mongoose for object modeling.
- Frontend:
  A React application built with TypeScript and Material-UI (MUI). It includes:
  - A modern, animated user interface for chatting with the AI.
  - A landing page showcasing the app's features.
  - Pages for login, signup, and password reset.
  - A collapsible sidebar for conversation history.
  - Theme toggling (dark/light mode) and responsive design.
- AI/ML:
  Uses RAG (Retrieval-Augmented Generation) and LangChain to enhance the AI's responses by retrieving relevant information from a knowledge base or external sources. This involves:
  - Retrieval: A retrieval mechanism fetches relevant documents or data from a knowledge base or external sources.
  - Augmentation: The retrieved information is combined with the user's query to provide a more informed response.
  - Generation: A generative model creates a response based on the augmented input.
  - Feedback Loop: A feedback loop continuously improves the system based on user interactions and feedback.
  - LangChain: LangChain manages the entire process, from retrieval to generation, ensuring seamless integration of RAG into the chatbot's workflow.
  - Pinecone: Pinecone provides vector similarity search to efficiently retrieve relevant documents or data for the RAG model.
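The retrieve → augment → generate flow above can be sketched end to end in TypeScript. This is a toy illustration only: the documents, the 3-dimensional "embeddings", and the function names are invented for the example, and the real app delegates retrieval to Pinecone and orchestration to LangChain.

```typescript
// Toy end-to-end RAG flow: retrieve -> augment -> generate.
// Everything here (docs, embeddings, names) is invented for illustration.

type Doc = { text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Retrieval: rank documents by similarity to the query embedding.
function retrieve(queryEmb: number[], corpus: Doc[], k: number): Doc[] {
  return [...corpus]
    .sort((x, y) => cosine(queryEmb, y.embedding) - cosine(queryEmb, x.embedding))
    .slice(0, k);
}

// Augmentation: fold the retrieved context into the prompt the
// generative model would receive.
function buildAugmentedPrompt(query: string, context: Doc[]): string {
  const ctx = context.map((d, i) => `[${i + 1}] ${d.text}`).join("\n");
  return `Context:\n${ctx}\n\nQuestion: ${query}\nAnswer using the context above.`;
}

const docs: Doc[] = [
  { text: "David Nguyen is a software engineer.", embedding: [1, 0, 0] },
  { text: "Lumina is built with React and Express.", embedding: [0, 1, 0] },
];

// Generation would send this prompt to the model; here we just print it.
console.log(buildAugmentedPrompt("Who is David Nguyen?", retrieve([0.9, 0.1, 0], docs, 1)));
```

In production the embeddings come from an embedding model and the similarity search runs inside Pinecone, but the shape of the pipeline is the same.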
- Backend:
  - Node.js & Express
  - TypeScript
  - MongoDB (with Mongoose)
  - JWT for authentication
  - Additional packages: bcrypt, cors, dotenv, multer, nodemailer, openai, uuid, etc.
  - Development tools: nodemon, ts-node
- Frontend:
  - React with TypeScript
  - Material-UI (MUI) for UI components
  - React Router for navigation
  - Axios for API requests
  - React Markdown for rendering AI-generated markdown text
- Clone the repository:

  ```bash
  git clone https://github.com/hoangsonww/AI-Assistant-Chatbot.git
  cd AI-Assistant-Chatbot/server
  ```
- Install dependencies:

  ```bash
  npm install
  ```
- Environment Variables:
  Create a `.env` file in the `server` folder with the following (adjust values as needed):

  ```env
  PORT=5000
  MONGODB_URI=mongodb://localhost:27017/ai-assistant
  JWT_SECRET=your_jwt_secret_here
  GOOGLE_AI_API_KEY=your_google_ai_api_key_here
  AI_INSTRUCTIONS=Your system instructions for the AI assistant
  PINECONE_API_KEY=your_pinecone_api_key_here
  PINECONE_INDEX_NAME=your_pinecone_index_name_here
  ```
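A missing variable typically surfaces later as a confusing runtime error, so a small fail-fast check at server startup can help. This helper is hypothetical (not part of the repo); the variable names are taken from the list above.

```typescript
// Hypothetical startup check: report required .env variables that are
// missing or empty. Names are taken from this README's .env example.

const REQUIRED = [
  "PORT",
  "MONGODB_URI",
  "JWT_SECRET",
  "GOOGLE_AI_API_KEY",
  "AI_INSTRUCTIONS",
  "PINECONE_API_KEY",
  "PINECONE_INDEX_NAME",
] as const;

// Returns the names of required variables that are missing or empty.
// At server startup you would call this with process.env.
function validateEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((name) => !env[name]);
}

// Demo with a deliberately incomplete configuration:
const partial = { PORT: "5000", MONGODB_URI: "mongodb://localhost:27017/ai-assistant" };
console.warn(`Missing: ${validateEnv(partial).join(", ")}`);
```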
- Run the server in development mode:

  ```bash
  npm run dev
  ```

  This uses nodemon with ts-node to watch for file changes.
- Navigate to the client folder:

  ```bash
  cd ../client
  ```
- Install dependencies:

  ```bash
  npm install
  ```
- Run the frontend development server:

  ```bash
  npm start
  ```

  The app will run on http://localhost:3000.
- Install necessary Node.js packages:

  ```bash
  npm install
  ```
- Store knowledge data in the Pinecone vector database:

  ```bash
  npm run store
  ```

  or

  ```bash
  ts-node server/src/scripts/storeKnowledge.ts
  ```

- Ensure you run this command before starting the backend server, so that the knowledge data is stored in the Pinecone vector database.
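The actual logic lives in `server/src/scripts/storeKnowledge.ts`. As a rough sketch of what such a script typically does before embedding, here is a hypothetical helper that splits source text into overlapping windows; the window size, overlap, and function name are assumptions for illustration, not taken from the repo.

```typescript
// Hypothetical chunking step for a storeKnowledge-style script: split
// the source text into overlapping windows. Each chunk would then be
// embedded and upserted into the Pinecone index (PINECONE_INDEX_NAME).

function chunkText(text: string, size: number, overlap: number): string[] {
  if (size <= overlap) throw new Error("size must be greater than overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}

// Demo: 20-character windows with a 5-character overlap.
console.log(chunkText("David Nguyen built Lumina with React and Express.", 20, 5));
```

Overlapping windows are a common choice here because they keep sentences that straddle a chunk boundary retrievable from at least one chunk.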
- Backend:
  Deploy the backend to your preferred Node.js hosting service (Heroku, AWS, etc.). Make sure to set your environment variables on the hosting platform.
- Frontend:
  Deploy the React frontend using services like Vercel, Netlify, or GitHub Pages. Update API endpoint URLs in the frontend accordingly.
- Landing Page:
  The landing page provides an overview of the app's features and two main actions: Create Account (for new users) and Continue as Guest.
- Authentication:
  Users can sign up, log in, and reset their password. Authenticated users can save and manage their conversation history.
- Chatting:
  The main chat area allows users to interact with the AI assistant. The sidebar displays saved conversations (for logged-in users) and allows renaming and searching.
- Theme:
  Toggle between dark and light mode via the navbar. The chosen theme is saved in local storage and persists across sessions.
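The theme persistence described above can be sketched as follows. The `KV` interface stands in for `window.localStorage` so the logic runs outside a browser; the `"theme"` key name is an assumption for illustration, not taken from the app's source.

```typescript
// Sketch of dark/light theme persistence against a localStorage-like
// store. The "theme" key and default of "light" are assumptions.

type Theme = "light" | "dark";

interface KV {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Default to light mode when nothing has been saved yet.
function loadTheme(store: KV): Theme {
  return store.getItem("theme") === "dark" ? "dark" : "light";
}

// Flip the theme and persist the choice so it survives reloads.
function toggleTheme(store: KV): Theme {
  const next: Theme = loadTheme(store) === "dark" ? "light" : "dark";
  store.setItem("theme", next);
  return next;
}

// In-memory stand-in for window.localStorage:
const mem = new Map<string, string>();
const store: KV = {
  getItem: (k) => mem.get(k) ?? null,
  setItem: (k, v) => { mem.set(k, v); },
};
```

In the browser you would pass `window.localStorage` directly, since it already satisfies the `KV` shape.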
- POST /api/auth/signup: Create a new user.
- POST /api/auth/login: Authenticate a user and return a JWT.
- GET /api/auth/verify-email?email=example@example.com: Check if an email exists.
- POST /api/auth/reset-password: Reset a user's password.
- POST /api/conversations: Create a new conversation.
- GET /api/conversations: Get all conversations for a user.
- GET /api/conversations/:id: Retrieve a conversation by ID.
- PUT /api/conversations/:id: Rename a conversation.
- GET /api/conversations/search/:query: Search for conversations by title or message content.
- DELETE /api/conversations/:id: Delete a conversation.
- POST /api/chat: Process a chat query and return an AI-generated response.
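To illustrate the JWT mechanics behind `POST /api/auth/login` and the protected conversation routes, here is a simplified HMAC-SHA256 sign/verify using only `node:crypto`. This is not the server's actual code: the real backend uses a JWT library, and claims validation, header checks, and expiry are omitted here.

```typescript
// Simplified JWT-style sign/verify (HS256) using only node:crypto.
// Illustrative only; the real server uses a proper JWT library.

import { createHmac, timingSafeEqual } from "node:crypto";

const b64url = (s: string): string => Buffer.from(s).toString("base64url");

function sign(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  return `${header}.${body}.${sig}`;
}

// Returns the decoded payload on success, or null when the signature
// does not match (wrong secret or tampered token).
function verify(token: string, secret: string): object | null {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  const a = Buffer.from(sig), b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

const demoToken = sign({ sub: "demo" }, "dev-secret");
```

On the protected endpoints, middleware would read this token from the `Authorization: Bearer <token>` header and reject the request when `verify` returns null.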
```
AI-Assistant-Chatbot/
├── docker-compose.yml
├── package.json
├── tsconfig.json
├── client/                     # Frontend React application
│   ├── package.json
│   ├── tsconfig.json
│   ├── docker-compose.yml
│   ├── Dockerfile
│   └── src/
│       ├── App.tsx
│       ├── index.tsx
│       ├── theme.ts
│       ├── dev/
│       │   ├── palette.tsx
│       │   ├── previews.tsx
│       │   ├── index.ts
│       │   └── useInitial.ts
│       ├── services/
│       │   └── api.ts
│       ├── types/
│       │   ├── conversation.d.ts
│       │   └── user.d.ts
│       ├── components/
│       │   ├── Navbar.tsx
│       │   ├── Sidebar.tsx
│       │   └── ChatArea.tsx
│       └── pages/
│           ├── LandingPage.tsx
│           ├── Home.tsx
│           ├── Login.tsx
│           ├── Signup.tsx
│           ├── NotFoundPage.tsx
│           └── ForgotPassword.tsx
└── server/                     # Backend Express application
    ├── package.json
    ├── tsconfig.json
    ├── Dockerfile
    ├── docker-compose.yml
    └── src/
        ├── server.ts
        ├── models/
        │   ├── Conversation.ts
        │   └── User.ts
        ├── routes/
        │   ├── auth.ts
        │   ├── conversations.ts
        │   └── chat.ts
        ├── services/
        │   └── authService.ts
        ├── utils/
        │   └── ephemeralConversations.ts
        └── middleware/
            └── auth.ts
```
To run the application using Docker, simply run `docker-compose up` in the root directory of the project. This will start both the backend and frontend services as defined in the `docker-compose.yml` file.
Why Dockerize?
- Consistency: Ensures the application runs the same way in different environments.
- Isolation: Keeps dependencies and configurations contained.
- Scalability: Makes it easier to scale services independently.
- Simplified Deployment: Streamlines the deployment process.
- Easier Collaboration: Provides a consistent environment for all developers.
There is an OpenAPI specification file (`openapi.yaml`) in the root directory that describes the API endpoints, request/response formats, and authentication methods. It can be used to generate client SDKs or documentation.
To view the API documentation, import the `openapi.yaml` file into tools like Swagger UI or Postman, or simply visit the `/docs` endpoint of the deployed backend.
- Fork the repository.
- Create your feature branch: `git checkout -b feature/your-feature-name`
- Commit your changes: `git commit -m 'Add some feature'`
- Push to the branch: `git push origin feature/your-feature-name`
- Open a Pull Request.
This project is licensed under the MIT License.
Thank you for checking out the AI Assistant Project! If you have any questions or feedback, feel free to reach out. Happy coding!