
PolyAI


iOS 15+ · MIT license · Swift Package Manager

An open-source Swift package that simplifies LLM message completions, designed for multi-model applications. It supports multiple providers through OpenAI-compatible APIs as well as the Anthropic API, enabling Swift developers to integrate different AI models seamlessly.

Description

OpenAI Compatibility

Easily call various LLM APIs using the OpenAI format, with built-in support for multiple models and providers through the SwiftOpenAI package.

Supported Providers:

  • OpenAI
  • Azure
  • Groq
  • DeepSeek
  • Google Gemini
  • OpenRouter
  • Ollama

Note: When using OpenAI-compatible configurations, you can identify them by the .openAI enum prefix in the configuration structure.

Example:

.openAI(.gemini(apiKey: "your_gemini_api_key_here"))

Anthropic

Additionally, the Anthropic API is supported through the SwiftAnthropic package.

Table of Contents

  • Description
  • Installation
  • Functionalities
  • Usage
  • Collaboration

Installation

Swift Package Manager

  1. Open your Swift project in Xcode.
  2. Go to File -> Add Package Dependency.
  3. In the search bar, enter https://github.com/jamesrochabrun/PolyAI.
  4. Choose the version you'd like to install.
  5. Click Add Package.
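
Alternatively, if your project uses a Package.swift manifest, you can declare the dependency there. A minimal sketch; the tools version, the release version, and the target name are placeholders to adjust for your project:

// swift-tools-version:5.9
import PackageDescription

// Note: the tools version, release version ("1.0.0"), and target name are placeholders.
let package = Package(
    name: "YourApp",
    platforms: [.iOS(.v15)],
    dependencies: [
        .package(url: "https://github.com/jamesrochabrun/PolyAI", from: "1.0.0")
    ],
    targets: [
        .target(name: "YourApp", dependencies: ["PolyAI"])
    ]
)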

Important

⚠️ Please take precautions to keep your API keys secure.

Remember that your API keys are a secret! Do not share them with others or expose them in any client-side code (browsers, apps). Production requests must be routed through your backend server, where your API keys can be securely loaded from an environment variable or key management service.
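
For local development, a safer pattern is to read the key from the process environment instead of embedding it in source. A minimal sketch, assuming a placeholder environment variable named OPENAI_API_KEY:

import Foundation
import PolyAI

// Read the key from an environment variable instead of hardcoding it.
// "OPENAI_API_KEY" is a placeholder name; use whatever your setup defines.
guard let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] else {
    fatalError("Missing OPENAI_API_KEY environment variable")
}
let openAIConfiguration: LLMConfiguration = .openAI(.api(key: apiKey))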

Functionalities

  • Chat completions
  • Chat completions with stream
  • Tool use
  • Image as input

Usage

To interface with different LLMs, you need only supply the corresponding LLM configuration and adjust the parameters accordingly.

First, import the PolyAI package:

import PolyAI

Then, define the LLM configurations.

Currently, the package supports OpenAI, Azure, Anthropic, Gemini, Groq, DeepSeek, and OpenRouter. Additionally, you can use Ollama to run local models like Llama 3 or Mistral through OpenAI-compatible endpoints.

// OpenAI
let openAIConfiguration: LLMConfiguration = .openAI(.api(key: "your_openai_api_key_here"))

// Gemini
let geminiConfiguration: LLMConfiguration = .openAI(.gemini(apiKey: "your_gemini_api_key_here"))

// Groq
let groqConfiguration: LLMConfiguration = .openAI(.groq(apiKey: "your_groq_api_key_here"))

// Ollama
let ollamaConfiguration: LLMConfiguration = .openAI(.ollama(url: "http://localhost:11434"))

// OpenRouter
let openRouterConfiguration: LLMConfiguration = .openAI(.openRouter(apiKey: "your_open-router_api_key_here"))

// DeepSeek
let deepSeekConfiguration: LLMConfiguration = .openAI(.deepSeek(apiKey: "your_deepseek_api_key_here"))

// Anthropic
let anthropicConfiguration: LLMConfiguration = .anthropic(apiKey: "your_anthropic_api_key_here")

let configurations = [openAIConfiguration, anthropicConfiguration, geminiConfiguration, ollamaConfiguration]

With the configurations set, initialize the service:

let service = PolyAIServiceFactory.serviceWith(configurations)

Now, you have access to all the models offered by these providers in a single package. πŸš€

Message

To send a message using OpenAI:

let prompt = "How are you today?"
let parameters: LLMParameter = .openAI(model: .o1Preview, messages: [.init(role: .user, content: prompt)])
let stream = try await service.streamMessage(parameters)
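
You can then iterate the returned async stream to render partial output as it arrives. A minimal consumption sketch; the chunk's `content` property is an assumption about the stream's element type, so check the package's API for the exact shape:

// Print each streamed chunk as it arrives.
// `chunk.content` is an assumed property name for the partial text delta.
for try await chunk in stream {
    if let content = chunk.content {
        print(content, terminator: "")
    }
}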

To interact with Anthropic instead, all you need to do is change just one line of code! πŸ”₯

let prompt = "How are you today?"
let parameters: LLMParameter = .anthropic(model: .claude3Sonnet, messages: [.init(role: .user, content: prompt)], maxTokens: 1024)
let stream = try await service.streamMessage(parameters)

To interact with Gemini instead, all you need to do (again) is change just one line of code! πŸ”₯

let prompt = "How are you today?"
let parameters: LLMParameter = .gemini(model: "gemini-1.5-pro-latest", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)

To interact with local models using Ollama, all you need to do (again) is change just one line of code! 🔥

let prompt = "How are you today?"
let parameters: LLMParameter = .ollama(model: "llama3", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)

As demonstrated, simply switch the LLMParameter to the desired provider.
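
Since every provider is reached through the same `streamMessage` entry point, you can keep call sites provider-agnostic. A sketch reusing the `service` created above; the `chunk.content` property is the same assumption as in the streaming sketch earlier:

// A provider-agnostic helper: callers choose the provider purely via LLMParameter.
func streamReply(for parameters: LLMParameter) async throws {
    let stream = try await service.streamMessage(parameters)
    for try await chunk in stream {
        // `chunk.content` is an assumed property name; see the streaming sketch above.
        if let content = chunk.content {
            print(content, terminator: "")
        }
    }
}

// Switching providers is now a one-line change at the call site:
try await streamReply(for: .openAI(model: .o1Preview, messages: [.init(role: .user, content: prompt)]))
try await streamReply(for: .anthropic(model: .claude3Sonnet, messages: [.init(role: .user, content: prompt)], maxTokens: 1024))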

OpenAI Azure

To access the OpenAI API via Azure, you can use the following configuration setup.

let azureConfiguration: LLMConfiguration = .openAI(.azure(configuration: .init(resourceName: "YOUR_RESOURCE_NAME", openAIAPIKey: .apiKey("YOUR_API_KEY"), apiVersion: "THE_API_VERSION")))

More information can be found here.

Groq

To access Groq, use the following configuration setup.

let groqConfiguration: LLMConfiguration = .openAI(.groq(apiKey: "your_groq_api_key_here"))

More information can be found here.

OpenRouter

To access OpenRouter, use the following configuration setup.

let openRouterConfiguration: LLMConfiguration = .openAI(.openRouter(apiKey: "your_open-router_api_key_here"))

More information can be found here.

DeepSeek

To access DeepSeek, use the following configuration setup.

let deepSeekConfiguration: LLMConfiguration = .openAI(.deepSeek(apiKey: "your_deepseek_api_key_here"))

More information can be found here.

OpenAI AIProxy

To access the OpenAI API via AIProxy, use the following configuration setup.

let aiProxyConfiguration: LLMConfiguration = .openAI(.aiProxy(aiproxyPartialKey: "hardcode_partial_key_here", aiproxyDeviceCheckBypass: "hardcode_device_check_bypass_here"))

More information can be found here.

Ollama

To interact with local models using Ollama OpenAI compatibility endpoints, use the following configuration setup.

  1. Download Ollama if you don't have it installed already.
  2. Download the model you need, e.g., for llama3 type in the terminal:

ollama pull llama3

Once you have the model installed locally, you are ready to use PolyAI!

let ollamaConfiguration: LLMConfiguration = .openAI(.ollama(url: "http://localhost:11434"))

More information can be found here.
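
Putting the pieces together, an end-to-end local run might look like the sketch below, reusing only the calls shown earlier (the `chunk.content` property remains an assumption about the chunk type):

// End-to-end local example: configure Ollama, build the service, stream a reply.
let ollamaConfiguration: LLMConfiguration = .openAI(.ollama(url: "http://localhost:11434"))
let service = PolyAIServiceFactory.serviceWith([ollamaConfiguration])

let parameters: LLMParameter = .ollama(model: "llama3", messages: [.init(role: .user, content: "How are you today?")], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)

for try await chunk in stream {
    // `chunk.content` is an assumed property name; see the streaming sketch above.
    if let content = chunk.content {
        print(content, terminator: "")
    }
}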

Collaboration

Open a PR for any proposed change, pointing it to the main branch.
