# @aigne/open-router
AIGNE OpenRouter SDK for accessing multiple AI models through a unified API within the AIGNE Framework.
## Introduction

@aigne/open-router provides seamless integration between the AIGNE Framework and OpenRouter's unified API for accessing a wide variety of AI models. The package lets developers leverage models from multiple providers (including OpenAI, Anthropic, Google, and more) through a single consistent interface, with flexible model selection and fallback options.
## Features
- OpenRouter API Integration: Direct connection to OpenRouter's API services
- Multi-Provider Access: Access to models from OpenAI, Anthropic, Google, and many other providers
- Unified Interface: Consistent interface for all models regardless of their origin
- Model Fallbacks: Easily configure fallback options between different models
- Chat Completions: Support for chat completions API with all available models
- Streaming Responses: Support for streaming responses for more responsive applications
- Type-Safe: Comprehensive TypeScript typings for all APIs and models
- Framework Integration: Compatible with the AIGNE Framework's model interface
- Error Handling: Robust error handling and retry mechanisms (see the sketch after this list)
- Full Configuration: Extensive configuration options for fine-tuning behavior
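For the error-handling point above, here is a minimal sketch of wrapping `invoke` in an application-level retry loop. The `invokeWithRetry` helper, attempt count, and delay are illustrative choices, not part of the SDK:

```ts
import { OpenRouterChatModel } from "@aigne/open-router";

const model = new OpenRouterChatModel({
  model: "openai/gpt-4o", // API key read from OPEN_ROUTER_API_KEY
});

// Hypothetical helper: retry a failed call with a growing pause between
// attempts. The attempt count and delay are arbitrary, not SDK defaults.
async function invokeWithRetry(prompt: string, attempts = 3) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await model.invoke({
        messages: [{ role: "user", content: prompt }],
      });
    } catch (error) {
      if (attempt === attempts) throw error; // out of retries, give up
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
    }
  }
}

const result = await invokeWithRetry("Hello!");
console.log(result?.text);
```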
## Installation

### Using npm

```bash
npm install @aigne/open-router @aigne/core
```

### Using yarn

```bash
yarn add @aigne/open-router @aigne/core
```

### Using pnpm

```bash
pnpm add @aigne/open-router @aigne/core
```
## Basic Usage
```ts
import { OpenRouterChatModel } from "@aigne/open-router";

const model = new OpenRouterChatModel({
  // Provide API key directly or use environment variable OPEN_ROUTER_API_KEY
  apiKey: "your-api-key", // Optional if set in env variables
  // Specify model (defaults to 'openai/gpt-4o')
  model: "anthropic/claude-3-opus",
  modelOptions: {
    temperature: 0.7,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Which model are you using?" }],
});

console.log(result);
/* Output:
{
  text: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic.",
  model: "anthropic/claude-3-opus",
  usage: {
    inputTokens: 5,
    outputTokens: 14
  }
}
*/
```
## Using Multiple Models with Fallbacks
```ts
const modelWithFallbacks = new OpenRouterChatModel({
  apiKey: "your-api-key",
  model: "openai/gpt-4o",
  fallbackModels: ["anthropic/claude-3-opus", "google/gemini-1.5-pro"], // Fallback order
  modelOptions: {
    temperature: 0.7,
  },
});

// Will try gpt-4o first, then claude-3-opus if that fails, then gemini-1.5-pro
const fallbackResult = await modelWithFallbacks.invoke({
  messages: [{ role: "user", content: "Which model are you using?" }],
});
```
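As in the basic example, the `model` field of the returned result reports which of the configured models ultimately served the request.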
## Streaming Responses
```ts
import { OpenRouterChatModel } from "@aigne/open-router";

const model = new OpenRouterChatModel({
  apiKey: "your-api-key",
  model: "anthropic/claude-3-opus",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Which model are you using?" }],
  },
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic."
console.log(json); // { model: "anthropic/claude-3-opus", usage: { inputTokens: 5, outputTokens: 14 } }
```
## License
Elastic-2.0
## Classes

### OpenRouterChatModel
Implementation of the ChatModel interface for the OpenRouter service.

OpenRouter provides access to a variety of large language models through a unified API. This implementation uses the OpenAI-compatible interface to connect to OpenRouter's service.

Default model: `openai/gpt-4o`
#### Examples
Here's how to create and use an OpenRouter chat model:
```ts
const model = new OpenRouterChatModel({
  // Provide API key directly or use environment variable OPEN_ROUTER_API_KEY
  apiKey: "your-api-key", // Optional if set in env variables
  // Specify model (defaults to 'openai/gpt-4o')
  model: "anthropic/claude-3-opus",
  modelOptions: {
    temperature: 0.7,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Which model are you using?" }],
});

console.log(result);
/* Output:
{
  text: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic.",
  model: "anthropic/claude-3-opus",
  usage: {
    inputTokens: 5,
    outputTokens: 14
  }
}
*/
```
Here's an example with streaming response:
```ts
const model = new OpenRouterChatModel({
  apiKey: "your-api-key",
  model: "anthropic/claude-3-opus",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Which model are you using?" }],
  },
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "I'm powered by OpenRouter, using the Claude 3 Opus model from Anthropic."
console.log(json); // { model: "anthropic/claude-3-opus", usage: { inputTokens: 5, outputTokens: 14 } }
```
#### Extends

- `OpenAIChatModel`
#### Indexable

```ts
[key: symbol]: (() => string) | (() => Promise<void>)
```
#### Constructors

##### Constructor

```ts
new OpenRouterChatModel(options?): OpenRouterChatModel
```

###### Parameters

| Parameter | Type |
| --- | --- |
| `options?` |  |

###### Returns

`OpenRouterChatModel`

###### Overrides

`OpenAIChatModel.constructor`
#### Properties

##### apiKeyEnvName

```ts
protected apiKeyEnvName: string = "OPEN_ROUTER_API_KEY"
```

###### Overrides

`OpenAIChatModel.apiKeyEnvName`
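Because the key is resolved from this environment variable, `apiKey` can be omitted from the constructor when the variable is set; a minimal sketch:

```ts
import { OpenRouterChatModel } from "@aigne/open-router";

// Assumes OPEN_ROUTER_API_KEY is set in the environment,
// so no apiKey option is needed here.
const model = new OpenRouterChatModel({
  model: "openai/gpt-4o",
});
```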
##### supportsParallelToolCalls

```ts
protected supportsParallelToolCalls: boolean = false
```

###### Overrides

`OpenAIChatModel.supportsParallelToolCalls`