simple llm function calling with Gemini Multimodal Live, multiple llm prompts (parallel pipecat pipelines), and a smidge of whimsy.


vipyne/annie-hall-weather-bot


annie hall weather bot

This repository demonstrates simple llm function calling with Gemini Multimodal Live and parallel Pipecat pipelines. Concept inspired by the "honest subtitles" scene*. Implementation heavily influenced by simple chatbot and this.
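In Pipecat, llm function calling boils down to registering an async handler with the LLM service (roughly `llm.register_function("get_weather", fetch_weather)`). The handler signature and names below follow Pipecat's convention but are illustrative assumptions, not code from this repo; a minimal sketch with a stand-in harness in place of the Gemini Multimodal Live runtime:

```python
import asyncio

# Handler following Pipecat's function-call callback convention (signature assumed):
# the LLM service invokes it with call metadata plus a result_callback to return data.
async def fetch_weather(function_name, tool_call_id, args, llm, context, result_callback):
    # A real bot would hit a weather API here; this returns a canned result.
    city = args.get("location", "unknown")
    await result_callback({"location": city, "conditions": "sunny", "temperature_f": 72})

# Tiny harness standing in for the runtime: collect whatever the handler reports back.
results = []

async def capture(result):
    results.append(result)

asyncio.run(fetch_weather("get_weather", "call-1", {"location": "Los Angeles"},
                          None, None, capture))
print(results[0]["conditions"])  # prints: sunny
```

The actual wiring (registering the handler on the Gemini service inside a parallel pipeline) lives in the server code.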

Quick Start

  1. Copy example.env to .env and add API keys

    cp example.env .env
  2. Set up the server

    cd server
    python3 -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    pip install -r requirements.txt
  3. Start the server

    python server.py
  4. Set up the client

    cd client/javascript
    npm install
  5. Start the client

    npm run dev
  6. Navigate to http://localhost:5173 in your browser and click "Talk to annie hall weather bot".
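The .env file from step 1 should hold the two API keys listed under Requirements. The variable names below are assumptions for illustration; check example.env for the exact names the server reads:

```shell
# API keys (variable names assumed; see example.env for the real ones)
DAILY_API_KEY=your_daily_api_key
GEMINI_API_KEY=your_gemini_api_key
```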

Server Overview

Client Overview

Important Note

The bot server must be running for any of the client implementations to work. Start the server first before trying any of the client apps.

Requirements

  • Python 3.10+
  • Node.js 16+
  • Daily API key (free)
  • Gemini API key (free)
  • Modern web browser with WebRTC support

* the content can be problematic, but the concept is solid.
