
An open-source Swift package that simplifies LLM message completions, designed for multi-model applications. It supports multiple providers through OpenAI-compatible APIs and the Anthropic API, enabling Swift developers to integrate different AI models seamlessly.
Easily call various LLM APIs using the OpenAI format, with built-in support for multiple models and providers through the SwiftOpenAI package.
Supported Providers:
Note: When using OpenAI-compatible configurations, you can identify them by the .openAI enum prefix in the configuration structure.
Example:
.openAI(.gemini(apiKey: "your_gemini_api_key_here"))
Additionally, the Anthropic API is supported through the SwiftAnthropic package.
- Installation
- Usage
- Message
- Collaboration
- OpenAI Azure
- Groq
- OpenRouter
- DeepSeek
- OpenAI AIProxy
- Ollama
- Open your Swift project in Xcode.
- Go to File -> Add Package Dependency.
- In the search bar, enter this URL.
- Choose the version you'd like to install.
- Click Add Package.
Remember that your API keys are secrets! Do not share them with others or expose them in any client-side code (browsers, apps). Production requests must be routed through your backend server, where your API keys can be securely loaded from an environment variable or key management service.
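For local development, one option is to read the key from the process environment rather than hardcoding it. This is a minimal sketch; `loadAPIKey` is a hypothetical helper, not part of PolyAI:

```swift
import Foundation

// A minimal sketch: read the key from the process environment (for example,
// set in your Xcode scheme or CI secrets) instead of hardcoding it in source.
// `loadAPIKey` is a hypothetical helper, not part of PolyAI.
func loadAPIKey(named name: String) -> String {
   guard let key = ProcessInfo.processInfo.environment[name], !key.isEmpty else {
      fatalError("Missing \(name); set it in your scheme's environment variables.")
   }
   return key
}

let openAIKey = loadAPIKey(named: "OPENAI_API_KEY")
```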
- Chat completions
- Chat completions with stream
- Tool use
- Image as input
To interface with different LLMs, you only need to supply the corresponding LLM configuration and adjust the parameters accordingly.
First, import the PolyAI package:
import PolyAI
Then, define the LLM configurations.
Currently, the package supports OpenAI, Azure, Anthropic, Gemini, Groq, DeepSeek, and OpenRouter. Additionally, you can use Ollama to run local models like Llama 3 or Mistral through OpenAI-compatible endpoints.
// OpenAI
let openAIConfiguration: LLMConfiguration = .openAI(.api(key: "your_openai_api_key_here"))
// Gemini
let geminiConfiguration: LLMConfiguration = .openAI(.gemini(apiKey: "your_gemini_api_key_here"))
// Groq
let groqConfiguration: LLMConfiguration = .openAI(.groq(apiKey: "your_groq_api_key_here"))
// Ollama
let ollamaConfiguration: LLMConfiguration = .openAI(.ollama(url: "http://localhost:11434"))
// OpenRouter
let openRouterConfiguration: LLMConfiguration = .openAI(.openRouter(apiKey: "your_open-router_api_key_here"))
// DeepSeek
let deepSeekConfiguration: LLMConfiguration = .openAI(.deepSeek(apiKey: "your_deepseek_api_key_here"))
// Anthropic
let anthropicConfiguration: LLMConfiguration = .anthropic(apiKey: "your_anthropic_api_key_here")
let configurations = [openAIConfiguration, anthropicConfiguration, geminiConfiguration, ollamaConfiguration]
With the configurations set, initialize the service:
let service = PolyAIServiceFactory.serviceWith(configurations)
Now, you have access to all the models offered by these providers in a single package.
To send a message using OpenAI:
let prompt = "How are you today?"
let parameters: LLMParameter = .openAI(model: .o1Preview, messages: [.init(role: .user, content: prompt)])
let stream = try await service.streamMessage(parameters)
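The returned value is an async stream of partial responses. A minimal sketch of consuming it, assuming each chunk exposes an optional `content` string (check the package's stream response type for the exact property names):

```swift
// Print text deltas as they arrive. `content` is assumed to be the optional
// text delta on each chunk; verify against the package's stream response type.
for try await chunk in stream {
   if let delta = chunk.content {
      print(delta, terminator: "")
   }
}
```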
To interact with Anthropic instead, all you need to do is change just one line of code!
let prompt = "How are you today?"
let parameters: LLMParameter = .anthropic(model: .claude3Sonnet, messages: [.init(role: .user, content: prompt)], maxTokens: 1024)
let stream = try await service.streamMessage(parameters)
To interact with Gemini instead, all you need to do (again) is change just one line of code!
let prompt = "How are you today?"
let parameters: LLMParameter = .gemini(model: "gemini-1.5-pro-latest", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)
To interact with local models using Ollama, all you need to do (again) is change just one line of code!
let prompt = "How are you today?"
let parameters: LLMParameter = .ollama(model: "llama3", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)
As demonstrated, simply switch the LLMParameter to the desired provider.
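Because every provider is addressed through the same `LLMParameter` enum, a small helper can route one prompt to whichever provider is selected at runtime. This sketch is illustrative: `Provider` is a hypothetical app-level enum, and the model choices simply mirror the examples above.

```swift
// Hypothetical app-level enum; not part of PolyAI.
enum Provider {
   case openAI, anthropic, gemini, ollama
}

// Build the same user prompt for whichever provider is selected.
// Model choices are examples; swap in models your account can access.
func makeParameters(for provider: Provider, prompt: String) -> LLMParameter {
   switch provider {
   case .openAI:
      return .openAI(model: .o1Preview, messages: [.init(role: .user, content: prompt)])
   case .anthropic:
      return .anthropic(model: .claude3Sonnet, messages: [.init(role: .user, content: prompt)], maxTokens: 1024)
   case .gemini:
      return .gemini(model: "gemini-1.5-pro-latest", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
   case .ollama:
      return .ollama(model: "llama3", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
   }
}

let stream = try await service.streamMessage(makeParameters(for: .anthropic, prompt: "How are you today?"))
```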
To access the OpenAI API via Azure, you can use the following configuration setup.
let azureConfiguration: LLMConfiguration = .openAI(.azure(configuration: .init(resourceName: "YOUR_RESOURCE_NAME", openAIAPIKey: .apiKey("YOUR_API_KEY"), apiVersion: "THE_API_VERSION")))
More information can be found here.
To access Groq, use the following configuration setup.
let groqConfiguration: LLMConfiguration = .openAI(.groq(apiKey: "your_groq_api_key_here"))
More information can be found here.
To access OpenRouter, use the following configuration setup.
let openRouterConfiguration: LLMConfiguration = .openAI(.openRouter(apiKey: "your_open-router_api_key_here"))
More information can be found here.
To access DeepSeek, use the following configuration setup.
let deepSeekConfiguration: LLMConfiguration = .openAI(.deepSeek(apiKey: "your_deepseek_api_key_here"))
More information can be found here.
To access the OpenAI API via AIProxy, use the following configuration setup.
let aiProxyConfiguration: LLMConfiguration = .openAI(.aiProxy(aiproxyPartialKey: "hardcode_partial_key_here", aiproxyDeviceCheckBypass: "hardcode_device_check_bypass_here"))
More information can be found here.
To interact with local models using Ollama OpenAI compatibility endpoints, use the following configuration setup.
1 - Download Ollama if you don't have it installed already.
2 - Download the model you need; e.g., for llama3, type in the terminal:
ollama pull llama3
Once you have the model installed locally, you are ready to use PolyAI!
let ollamaConfiguration: LLMConfiguration = .ollama(url: "http://localhost:11434")
More information can be found here.
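Putting it all together, a minimal end-to-end sketch for a local model. This assumes Ollama is serving on its default port and llama3 has been pulled as above:

```swift
import PolyAI

// End-to-end sketch: stream from a local llama3 model through Ollama.
// Assumes `ollama pull llama3` has completed and the server is running
// on the default port.
let ollamaConfiguration: LLMConfiguration = .ollama(url: "http://localhost:11434")
let service = PolyAIServiceFactory.serviceWith([ollamaConfiguration])

let parameters: LLMParameter = .ollama(
   model: "llama3",
   messages: [.init(role: .user, content: "How are you today?")],
   maxTokens: 2000)
let stream = try await service.streamMessage(parameters)
```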
Open a PR for any proposed change, pointing it to the main branch.