
@aigne/ollama



AIGNE Ollama SDK for integrating with locally hosted AI models via Ollama within the AIGNE Framework.

Introduction#

@aigne/ollama provides a seamless integration between the AIGNE Framework and locally hosted AI models via Ollama. This package enables developers to easily leverage open-source language models running locally through Ollama in their AIGNE applications, providing a consistent interface across the framework while offering private, offline access to AI capabilities.

Features#

  • Ollama Integration: Direct connection to a local Ollama instance
  • Local Model Support: Support for a wide variety of open-source models hosted via Ollama
  • Chat Completions: Support for chat completions API with all available Ollama models
  • Streaming Responses: Support for streaming responses for more responsive applications
  • Type-Safe: Comprehensive TypeScript typings for all APIs and models
  • Consistent Interface: Compatible with the AIGNE Framework's model interface
  • Privacy-Focused: Run models locally without sending data to external API services
  • Full Configuration: Extensive configuration options for fine-tuning behavior

Installation#

Using npm#

npm install @aigne/ollama @aigne/core

Using yarn#

yarn add @aigne/ollama @aigne/core

Using pnpm#

pnpm add @aigne/ollama @aigne/core

Prerequisites#

Before using this package, you need to have Ollama installed and running on your machine with at least one model pulled. Follow the instructions on the Ollama website to set up Ollama.
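
To confirm that the local server is reachable and that at least one model has been pulled, you can query Ollama's REST API directly. The following is a minimal sketch, assuming a default install listening on http://localhost:11434 and Node 18+ (for the global fetch); GET /api/tags is Ollama's endpoint for listing locally pulled models:

// Connectivity check against a local Ollama server.
// GET /api/tags lists the models that have already been pulled (same data as `ollama list`).
const response = await fetch("http://localhost:11434/api/tags");
if (!response.ok) throw new Error(`Ollama server responded with ${response.status}`);

const { models } = (await response.json()) as { models: { name: string }[] };
if (models.length === 0) {
  console.warn("No models pulled yet - run `ollama pull llama3` first.");
} else {
  console.log("Locally available models:", models.map((m) => m.name).join(", "));
}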

Basic Usage#

import { OllamaChatModel } from "@aigne/ollama";

const model = new OllamaChatModel({
  // Specify base URL (defaults to http://localhost:11434)
  baseURL: "http://localhost:11434",
  // Specify Ollama model to use (defaults to 'llama3')
  model: "llama3",
  modelOptions: {
    temperature: 0.8,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Tell me what model you're using" }],
});

console.log(result);
/* Output:
{
  text: "I'm an AI assistant running on Ollama with the llama3 model.",
  model: "llama3"
}
*/

Streaming Responses#

import { OllamaChatModel } from "@aigne/ollama";

const model = new OllamaChatModel({
  baseURL: "http://localhost:11434",
  model: "llama3",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Tell me what model you're using" }],
  },
  undefined,
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "I'm an AI assistant running on Ollama with the llama3 model."
console.log(json); // { model: "llama3" }

License#

Elastic-2.0

Classes#

OllamaChatModel#

Implementation of the ChatModel interface for Ollama.

This model allows you to run open-source LLMs locally using Ollama, with an OpenAI-compatible API interface.

Default model: 'llama3.2'
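
Because both the base URL and the model name have defaults, the smallest possible setup takes no options at all. The snippet below is a sketch, assuming a default local install and that the default model has already been pulled:

import { OllamaChatModel } from "@aigne/ollama";

// Relies entirely on defaults: http://localhost:11434 and the package's default model.
const model = new OllamaChatModel();

const result = await model.invoke({
  messages: [{ role: "user", content: "Say hello in one sentence." }],
});
console.log(result.text);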

Examples#

Here's how to create and use an Ollama chat model:

const model = new OllamaChatModel({
  // Specify base URL (defaults to http://localhost:11434)
  baseURL: "http://localhost:11434",
  // Specify Ollama model to use (defaults to 'llama3')
  model: "llama3",
  modelOptions: {
    temperature: 0.8,
  },
});

const result = await model.invoke({
  messages: [{ role: "user", content: "Tell me what model you're using" }],
});

console.log(result);
/* Output:
{
  text: "I'm an AI assistant running on Ollama with the llama3 model.",
  model: "llama3"
}
*/

Here's an example with streaming response:

const model = new OllamaChatModel({
  baseURL: "http://localhost:11434",
  model: "llama3",
});

const stream = await model.invoke(
  {
    messages: [{ role: "user", content: "Tell me what model you're using" }],
  },
  { streaming: true },
);

let fullText = "";
const json = {};

for await (const chunk of stream) {
  const text = chunk.delta.text?.text;
  if (text) fullText += text;
  if (chunk.delta.json) Object.assign(json, chunk.delta.json);
}

console.log(fullText); // Output: "I'm an AI assistant running on Ollama with the llama3 model."
console.log(json); // { model: "llama3" }

Extends#

  • OpenAIChatModel
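
Because OllamaChatModel extends OpenAIChatModel, which implements the framework's ChatModel interface, it can be passed to any code written against that interface. A minimal sketch, assuming ChatModel is exported from @aigne/core:

import type { ChatModel } from "@aigne/core";
import { OllamaChatModel } from "@aigne/ollama";

// Any helper that accepts the generic ChatModel type works with the Ollama model unchanged.
async function summarize(model: ChatModel, text: string): Promise<string | undefined> {
  const result = await model.invoke({
    messages: [{ role: "user", content: `Summarize in one sentence: ${text}` }],
  });
  return result.text;
}

const model = new OllamaChatModel({ model: "llama3" });
console.log(await summarize(model, "Ollama runs open-source language models locally."));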

Indexable#

[key: symbol]: (() => string) | (() => Promise<void>)

Constructors#

Constructor#

new OllamaChatModel(options?): OllamaChatModel

Parameters#

Parameter   Type
options?    OpenAIChatModelOptions

Returns#

OllamaChatModel

Overrides#

OpenAIChatModel.constructor

Properties#

apiKeyEnvName#

protected apiKeyEnvName: string = "OLLAMA_API_KEY"

Overrides#

OpenAIChatModel.apiKeyEnvName

apiKeyDefault#

protected apiKeyDefault: string = "ollama"

Overrides#

OpenAIChatModel.apiKeyDefault
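
The two properties above explain why no API key setup is needed for a plain local install: Ollama itself does not authenticate requests, so the placeholder value "ollama" is used unless the OLLAMA_API_KEY environment variable is set. The sketch below is an assumption-level example for the case where the Ollama endpoint sits behind an authenticating proxy; the URL and token are purely illustrative:

import { OllamaChatModel } from "@aigne/ollama";

// Plain local install: no key needed, the "ollama" placeholder is used automatically.
const localModel = new OllamaChatModel({ baseURL: "http://localhost:11434", model: "llama3" });

// Hypothetical remote setup behind an authenticating proxy: supply the key via
// OLLAMA_API_KEY before constructing the model.
process.env.OLLAMA_API_KEY = "example-proxy-token"; // illustrative value only
const remoteModel = new OllamaChatModel({
  baseURL: "https://ollama.example.com", // hypothetical URL
  model: "llama3",
});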
