
@aigne/openai



AIGNE OpenAI SDK for integrating with OpenAI's GPT models and API services within the AIGNE Framework.

Introduction#

@aigne/openai provides seamless integration between the AIGNE Framework and OpenAI's language models and APIs. This package lets developers leverage OpenAI's GPT models in their AIGNE applications through a consistent interface across the framework, while taking advantage of OpenAI's advanced AI capabilities.

Features#

  • OpenAI API Integration: Direct connection to OpenAI's API services using the official SDK
  • Chat Completions: Support for OpenAI's chat completions API with all available models
  • Function Calling: Built-in support for OpenAI's function calling capability
  • Streaming Responses: Support for streaming responses for more responsive applications
  • Type-Safe: Comprehensive TypeScript typings for all APIs and models
  • Consistent Interface: Compatible with the AIGNE Framework's model interface
  • Error Handling: Robust error handling and retry mechanisms
  • Full Configuration: Extensive configuration options for fine-tuning behavior
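The function-calling feature listed above works by passing tool definitions alongside the messages. The sketch below shows a tool definition in the JSON Schema shape OpenAI's function calling expects; the `tools` field name on `invoke` and the exact definition shape are assumptions modeled on common AIGNE usage, so check the `ChatModelInput` types in @aigne/core for your version:

```typescript
// Hypothetical tool definition for illustration; the shape mirrors
// OpenAI's function-calling format (name, description, JSON Schema params).
const getWeatherTool = {
  type: "function" as const,
  function: {
    name: "get_weather",
    description: "Look up the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. 'Paris'" },
      },
      required: ["city"],
    },
  },
};

// Sketch of passing the tool on invoke (requires a configured model):
// const result = await model.invoke({
//   messages: [{ role: "user", content: "Weather in Paris?" }],
//   tools: [getWeatherTool],
// });
// The result would then carry any tool calls the model requested.
```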

Installation#

Using npm#

npm install @aigne/openai @aigne/core

Using yarn#

yarn add @aigne/openai @aigne/core

Using pnpm#

pnpm add @aigne/openai @aigne/core

Basic Usage#

import { OpenAIChatModel } from "@aigne/openai";

const model = new OpenAIChatModel({
  // Provide API key directly or use environment variable OPENAI_API_KEY
  apiKey: "your-api-key", // Optional if set in env variables
  model: "gpt-4o", // Defaults to "gpt-4o-mini" if not specified
  modelOptions: {
    temperature: 0.7,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Hello, who are you?" }],
});

console.log(result);
/* Output:
{
  text: "Hello! How can I assist you today?",
  model: "gpt-4o",
  usage: {
    inputTokens: 10,
    outputTokens: 9
  }
}
*/

Streaming Responses#

import { OpenAIChatModel } from "@aigne/openai";

const model = new OpenAIChatModel({
  apiKey: "your-api-key",
  model: "gpt-4o",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Hello, who are you?" }],
  },
  undefined,
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "Hello! How can I assist you today?"
console.log(json); // { model: "gpt-4o", usage: { inputTokens: 10, outputTokens: 9 } }
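The package's error handling and retry features (listed above) generally cover transient failures, but it can help to see what a retry layer looks like. The helper below is a minimal sketch of wrapping `invoke` with exponential backoff; it is an illustration, not the package's internal mechanism:

```typescript
// Minimal retry helper with exponential backoff. @aigne/openai may already
// retry internally, so treat this as an illustration of the pattern.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt (skip after the final failure).
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage with a model (sketch; requires a configured model instance):
// const result = await withRetries(() =>
//   model.invoke({ messages: [{ role: "user", content: "Hi" }] }),
// );
```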

License#

Elastic-2.0

Classes#

OpenAIChatModel#

Implementation of the ChatModel interface for OpenAI's API

This model provides access to OpenAI's capabilities including:

  • Text generation
  • Tool use with parallel tool calls
  • JSON structured output
  • Image understanding

Default model: 'gpt-4o-mini'
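One of the capabilities above is JSON structured output. A sketch of requesting it is shown below; the `responseFormat` field name and its shape are assumptions modeled on OpenAI's `response_format` API, so verify against `ChatModelInput` in @aigne/core:

```typescript
// JSON Schema describing the structured output we want the model to return.
const citySchema = {
  type: "object",
  properties: {
    city: { type: "string" },
    country: { type: "string" },
  },
  required: ["city", "country"],
};

// Sketch of an invoke requesting structured output (requires a model):
// const result = await model.invoke({
//   messages: [{ role: "user", content: "Where is the Eiffel Tower?" }],
//   responseFormat: {
//     type: "json_schema",
//     jsonSchema: { name: "city_info", schema: citySchema, strict: true },
//   },
// });
// The structured result would then conform to citySchema.
```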

Examples#

Here's how to create and use an OpenAI chat model:

const model = new OpenAIChatModel({
  // Provide API key directly or use environment variable OPENAI_API_KEY
  apiKey: "your-api-key", // Optional if set in env variables
  model: "gpt-4o", // Defaults to "gpt-4o-mini" if not specified
  modelOptions: {
    temperature: 0.7,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Hello, who are you?" }],
});

console.log(result);
/* Output:
{
  text: "Hello! How can I assist you today?",
  model: "gpt-4o",
  usage: {
    inputTokens: 10,
    outputTokens: 9
  }
}
*/

Here's an example with streaming response:

const model = new OpenAIChatModel({
  apiKey: "your-api-key",
  model: "gpt-4o",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Hello, who are you?" }],
  },
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "Hello! How can I assist you today?"
console.log(json); // { model: "gpt-4o", usage: { inputTokens: 10, outputTokens: 9 } }

Extends#

  • ChatModel

Indexable#

[key: symbol]: (() => string) | (() => Promise<void>)

Constructors#

Constructor#

new OpenAIChatModel(options?): OpenAIChatModel

Parameters#

| Parameter | Type |
| --- | --- |
| options? | OpenAIChatModelOptions |

Returns#

OpenAIChatModel

Overrides#

ChatModel.constructor

Properties#

options?#

optional options: OpenAIChatModelOptions

apiKeyEnvName#

protected apiKeyEnvName: string = "OPENAI_API_KEY"

apiKeyDefault#

protected apiKeyDefault: undefined | string

supportsNativeStructuredOutputs#

protected supportsNativeStructuredOutputs: boolean = true

supportsEndWithSystemMessage#

protected supportsEndWithSystemMessage: boolean = true

supportsToolsUseWithJsonSchema#

protected supportsToolsUseWithJsonSchema: boolean = true

supportsParallelToolCalls#

protected supportsParallelToolCalls: boolean = true

Indicates whether the model supports parallel tool calls

Defaults to true; subclasses can override this property based on specific model capabilities.

Overrides#

ChatModel.supportsParallelToolCalls

supportsToolsEmptyParameters#

protected supportsToolsEmptyParameters: boolean = true

supportsToolStreaming#

protected supportsToolStreaming: boolean = true

supportsTemperature#

protected supportsTemperature: boolean = true

Accessors#

client#
Get Signature#

get client(): OpenAI

Returns#

OpenAI

modelOptions#
Get Signature#

get modelOptions(): undefined | ChatModelOptions

Returns#

undefined | ChatModelOptions

Methods#

process()#

process(input): PromiseOrValue<`AgentProcessResult`<ChatModelOutput>>

Process the input and generate a response

Parameters#

| Parameter | Type | Description |
| --- | --- | --- |
| input | ChatModelInput | The input to process |

Returns#

PromiseOrValue<`AgentProcessResult`<ChatModelOutput>>

The generated response

Overrides#

ChatModel.process

Interfaces#

OpenAIChatModelCapabilities#

Properties#

| Property | Type |
| --- | --- |
| supportsNativeStructuredOutputs | boolean |
| supportsEndWithSystemMessage | boolean |
| supportsToolsUseWithJsonSchema | boolean |
| supportsParallelToolCalls | boolean |
| supportsToolsEmptyParameters | boolean |
| supportsToolStreaming | boolean |
| supportsTemperature | boolean |


OpenAIChatModelOptions#

Configuration options for OpenAI Chat Model

Properties#

| Property | Type | Description |
| --- | --- | --- |
| apiKey? | string | API key for the OpenAI API. If not provided, falls back to the OPENAI_API_KEY environment variable. |
| baseURL? | string | Base URL for the OpenAI API. Useful for proxies or alternate endpoints. |
| model? | string | OpenAI model to use. Defaults to 'gpt-4o-mini'. |
| modelOptions? | ChatModelOptions | Additional model options to control behavior. |
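The `baseURL` option above is the hook for routing requests through a proxy or an OpenAI-compatible gateway. A minimal configuration sketch follows; the gateway URL is a placeholder, so substitute your own endpoint:

```typescript
// Configuration sketch for pointing OpenAIChatModel at an
// OpenAI-compatible endpoint. The URL below is a placeholder.
const options = {
  apiKey: "your-api-key", // or rely on OPENAI_API_KEY in the environment
  baseURL: "https://my-gateway.example.com/v1", // placeholder gateway URL
  model: "gpt-4o-mini",
};

// Sketch of constructing the model with these options:
// import { OpenAIChatModel } from "@aigne/openai";
// const model = new OpenAIChatModel(options);
```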
