If you're building AI-powered applications, you've probably noticed most frameworks assume you're comfortable with Python. AIGNE takes a different approach: it's built for TypeScript developers who want to create production-ready AI agents without leaving their native ecosystem.
This guide walks you through building your first AI agent with AIGNE, from installation to deployment. By the end, you'll understand how to create intelligent applications that leverage multiple LLM providers through a unified API.
## What Makes AIGNE Different
Before we get started, here's what sets AIGNE apart from other AI frameworks:
**TypeScript-first design**: Not a port from Python. Every API, type definition, and workflow is designed for TypeScript developers.

**Multi-provider LLM support**: Switch between OpenAI, Claude, Gemini, or Nova without rewriting your code. The unified API handles provider differences automatically.

**Agentic File System (AFS)**: Built-in file operations designed specifically for AI agents. Your agents can read, write, and manage files with type safety.

**Production-ready**: No experimental flags or beta warnings. AIGNE ships with error handling, logging, and monitoring built in.
## Prerequisites
You'll need basic familiarity with:
- Node.js (version 18 or higher) and npm
- TypeScript fundamentals
- Async/await patterns
- API key management
You don't need prior AI or machine learning experience. If you can build a REST API in TypeScript, you can build AI agents with AIGNE.
## Step 1: Installation and Setup

Create a new project and install AIGNE:

```bash
mkdir my-aigne-agent
cd my-aigne-agent
npm init -y
npm install @aigne/core
npm install -D typescript @types/node
```

Initialize TypeScript:

```bash
npx tsc --init
```
Update your tsconfig.json to enable ES modules:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true,
    "outDir": "./dist"
  }
}
```
Set `"type": "module"` and add build scripts to package.json:

```json
{
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```
## Step 2: Configure Your LLM Provider

AIGNE supports multiple providers, but we'll start with OpenAI. Create a .env file:

```
OPENAI_API_KEY=your_api_key_here
```
Never commit this file. Add it to .gitignore:

```bash
echo ".env" >> .gitignore
```
To use a different provider, you can swap in Claude, Gemini, or Nova credentials. The AIGNE API remains identical across providers.
## Step 3: Create Your First Agent

Create src/index.ts:

```typescript
import { Agent, createLLMClient } from '@aigne/core';

async function main() {
  // Initialize the LLM client. OPENAI_API_KEY must be present in the
  // environment; see Troubleshooting below if you need dotenv to load it.
  const llm = createLLMClient({
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4'
  });

  // Create an agent with a system prompt
  const agent = new Agent({
    name: 'assistant',
    llm,
    systemPrompt: `You are a helpful coding assistant specializing in TypeScript.
Provide clear, concise answers with code examples when relevant.`
  });

  // Send a message
  const response = await agent.chat('How do I use async/await in TypeScript?');
  console.log(response.content);
}

main().catch(console.error);
```
Build and run:

```bash
npm run build
npm start
```
You should see a detailed explanation of async/await patterns in TypeScript. Each chat call is independent at this point; the next step adds memory so the agent retains conversation context.
## Step 4: Add Conversation Memory

Agents become more useful when they remember previous interactions:

```typescript
import { Agent, createLLMClient, MemoryStore } from '@aigne/core';

async function main() {
  const llm = createLLMClient({
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4'
  });

  const memory = new MemoryStore();

  const agent = new Agent({
    name: 'assistant',
    llm,
    memory,
    systemPrompt: 'You are a helpful coding assistant.'
  });

  // First message
  await agent.chat('I need to build a REST API in Express');

  // Second message - agent remembers context
  const response = await agent.chat('How should I structure the routes?');
  console.log(response.content);
}

main().catch(console.error);
```
The MemoryStore tracks conversation history. The agent references earlier messages when forming responses.
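If conversations need to survive process restarts, you could persist the history yourself. This sketch assumes MemoryStore can be seeded with saved messages and exposes a getMessages() accessor; both are hypothetical, so check AIGNE's actual memory API before relying on them:

```typescript
import { promises as fs } from 'node:fs';
import { MemoryStore } from '@aigne/core';

const HISTORY_FILE = './history.json';

// Hypothetical: the MemoryStore constructor accepts prior messages
async function loadMemory(): Promise<MemoryStore> {
  try {
    const saved = JSON.parse(await fs.readFile(HISTORY_FILE, 'utf8'));
    return new MemoryStore(saved);
  } catch {
    return new MemoryStore(); // no saved history yet
  }
}

// Hypothetical: getMessages() returns the conversation history
async function saveMemory(memory: MemoryStore) {
  await fs.writeFile(HISTORY_FILE, JSON.stringify(memory.getMessages()));
}
```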
## Step 5: Use the Agentic File System

The AFS lets agents interact with files safely:

```typescript
import { Agent, createLLMClient, AFS } from '@aigne/core';

async function main() {
  const llm = createLLMClient({
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4'
  });

  // Initialize AFS with a workspace directory
  const afs = new AFS('./workspace');

  const agent = new Agent({
    name: 'codewriter',
    llm,
    afs,
    systemPrompt: 'You write TypeScript code to files using the AFS.'
  });

  await agent.chat(`Create a simple Express server in server.ts
with a health check endpoint`);

  // Agent writes the file to ./workspace/server.ts
  console.log('File created successfully');
}

main().catch(console.error);
```
The AFS provides type-safe file operations. Agents can't access files outside their designated workspace, which prevents path traversal and accidental writes elsewhere on your system.
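You can presumably also drive the AFS from your own code, not just through an agent. The read/write method names below are assumptions for illustration, not documented AIGNE API:

```typescript
import { AFS } from '@aigne/core';

const afs = new AFS('./workspace');

// Method names are hypothetical; consult the AFS reference for the real ones
await afs.write('notes.md', '# Project notes\n');
const notes = await afs.read('notes.md');
console.log(notes);

// Paths resolve inside ./workspace, so an escape attempt such as
// '../secrets.env' should be rejected by the AFS
```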
## Step 6: Implement Workflow Patterns

AIGNE includes workflow patterns for complex tasks. Here's a sequential workflow:

```typescript
import { Sequential, Agent, createLLMClient } from '@aigne/core';

async function main() {
  const llm = createLLMClient({
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4'
  });

  const researcher = new Agent({
    name: 'researcher',
    llm,
    systemPrompt: 'Research topics and provide structured information.'
  });

  const writer = new Agent({
    name: 'writer',
    llm,
    systemPrompt: 'Write clear, engaging content from research notes.'
  });

  // Sequential workflow: research first, then write
  const workflow = new Sequential([researcher, writer]);
  const result = await workflow.run('Write a blog post about TypeScript generics');
  console.log(result.output);
}

main().catch(console.error);
```
Other workflow patterns include:

- **Concurrent**: Run multiple agents in parallel (see the sketch after this list)
- **Router**: Route tasks to specialized agents
- **Handoff**: Transfer context between agents
- **Reflection**: Self-evaluate and improve outputs
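The Concurrent pattern isn't shown above, so here's a minimal sketch. It assumes `Concurrent` mirrors `Sequential`'s constructor and `run()` signature, which is my assumption rather than documented API:

```typescript
import { Concurrent, Agent, createLLMClient } from '@aigne/core';

const llm = createLLMClient({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4'
});

// Two agents process the same input independently and simultaneously
const reviewer = new Agent({
  name: 'reviewer',
  llm,
  systemPrompt: 'Review code for bugs and risky patterns.'
});

const documenter = new Agent({
  name: 'documenter',
  llm,
  systemPrompt: 'Summarize what the code does for documentation.'
});

// Assumption: Concurrent takes an agent array like Sequential does,
// and run() resolves once every agent has finished
const workflow = new Concurrent([reviewer, documenter]);
const result = await workflow.run('Analyze this utility module: ...');
console.log(result.output);
```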
## Step 7: Switch LLM Providers

One of AIGNE's strengths is provider flexibility. Switching from OpenAI to Claude means changing three values:

```typescript
const llm = createLLMClient({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-3-opus-20240229'
});
```
Your agent code remains unchanged. The unified API handles provider-specific differences like request formats, rate limits, and streaming.
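If you want to choose the provider at deploy time, a small factory keeps the rest of your code provider-agnostic. This sketch uses only the `createLLMClient` call shown above; the `LLM_PROVIDER` variable and the default model IDs are my own choices, not AIGNE conventions:

```typescript
import { createLLMClient } from '@aigne/core';

type Provider = 'openai' | 'anthropic';

function llmFromEnv() {
  // LLM_PROVIDER is an env variable name invented for this example
  const provider = (process.env.LLM_PROVIDER ?? 'openai') as Provider;

  // Illustrative defaults; pick whatever models fit your use case
  const settings = {
    openai: { apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4' },
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY, model: 'claude-3-opus-20240229' }
  }[provider];

  return createLLMClient({ provider, ...settings });
}

// Agent code stays identical no matter which provider is selected
const llm = llmFromEnv();
```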
## Expected Results
After following this guide, you should have:
- A working AIGNE development environment
- An agent that maintains conversation context
- File operations using the Agentic File System
- A multi-agent workflow using Sequential pattern
- Understanding of how to swap LLM providers
Your agents can now:
- Respond to conversational queries
- Remember previous interactions
- Read and write files safely
- Coordinate with other agents
- Switch between LLM providers without code changes
## Next Steps
Now that you have the basics, here are paths to explore:
**Add tool use**: Give agents access to external APIs, databases, or custom functions. AIGNE's tool system makes this straightforward.
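The tool API itself isn't covered in this guide, so treat the following as a sketch only: the `tools` option and the tool object shape are assumptions, not confirmed AIGNE interfaces:

```typescript
import { Agent, createLLMClient } from '@aigne/core';

const llm = createLLMClient({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4'
});

// Hypothetical tool shape: name, description, and an execute function.
// Check the AIGNE docs for the real tool interface.
const weatherTool = {
  name: 'get_weather',
  description: 'Fetch the current weather for a city',
  execute: async ({ city }: { city: string }) => {
    const res = await fetch(`https://api.example.com/weather?city=${encodeURIComponent(city)}`);
    return res.json();
  }
};

const agent = new Agent({
  name: 'assistant',
  llm,
  tools: [weatherTool], // 'tools' is an assumed option name
  systemPrompt: 'Use tools when they help answer the question.'
});
```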
**Build group chat workflows**: Multiple agents collaborate by discussing problems together, each contributing their expertise.

**Implement code execution**: The code execution workflow lets agents write and run code, then learn from the results.

**Integrate with Blocklet Server**: Deploy your agents as self-hosted Blocklets for production use. Full documentation at docs.arcblock.io.

**Monitor and observe**: Add AIGNE's built-in observability to track agent decisions, token usage, and performance metrics.
The AIGNE documentation includes detailed guides for each workflow pattern, advanced configurations, and production deployment strategies.
## Why TypeScript for AI Agents
Python dominates AI development, but TypeScript offers advantages for production applications:
**Type safety catches errors**: Your IDE flags mistakes before runtime. This matters more as agents become complex.

**Better tooling**: Auto-complete, refactoring, and debugging tools work better with TypeScript's type system.

**Frontend integration**: If you're building AI-powered web apps, sharing types between frontend and backend eliminates a whole class of bugs.

**Team scalability**: Large teams benefit from explicit interfaces and contracts that TypeScript enforces.
AIGNE brings these benefits to AI agent development without sacrificing the flexibility that makes AI applications powerful.
## Common Patterns
Here are patterns you'll use frequently:
**Specialized agents**: Create agents with narrow expertise. A "researcher" agent, a "writer" agent, and an "editor" agent working together produce better results than one generalist.
**Workspace isolation**: Give each agent its own AFS workspace. This prevents agents from accidentally modifying each other's files.
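Using the `AFS` constructor from Step 5, isolation is just a matter of handing each agent a different directory (the paths here are arbitrary):

```typescript
import { Agent, AFS, createLLMClient } from '@aigne/core';

const llm = createLLMClient({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4'
});

// Separate sandboxes: neither agent can touch the other's files
const writer = new Agent({
  name: 'writer',
  llm,
  afs: new AFS('./workspace/writer'),
  systemPrompt: 'Draft TypeScript modules to files.'
});

const editor = new Agent({
  name: 'editor',
  llm,
  afs: new AFS('./workspace/editor'),
  systemPrompt: 'Review drafts and write feedback to files.'
});
```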
**Streaming responses**: For user-facing applications, stream agent responses token-by-token instead of waiting for completion:

```typescript
const stream = await agent.chatStream('Explain quantum computing');
for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```
**Error recovery**: Agents fail. Network issues, rate limits, and invalid responses happen. Wrap agent calls in try-catch blocks and implement retry logic:

```typescript
async function safeChat(agent: Agent, message: string, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await agent.chat(message);
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      // Back off with an increasing delay: 1s, then 2s, then 3s
      await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1)));
    }
  }
  // Unreachable, but satisfies TypeScript's strict return-path analysis
  throw new Error('retry loop exited unexpectedly');
}
```
## Performance Considerations
A few tips for production applications:
**Model selection**: GPT-4 is powerful but expensive. Use faster models like GPT-3.5-turbo or Claude Haiku for simple tasks. Reserve powerful models for complex reasoning.
**Prompt engineering**: Clear system prompts reduce back-and-forth. Tell agents exactly what format you expect in responses.
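A format-pinning system prompt is plain text, so there's nothing AIGNE-specific to learn. Something like this tends to produce consistent, parseable output:

```typescript
const systemPrompt = `You are a code review assistant.
Respond in exactly three sections, in this order:
1. Summary: one sentence.
2. Issues: a bulleted list, each item citing a line number.
3. Verdict: either "approve" or "request changes".
Do not add any text outside these sections.`;
```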
**Token limits**: Monitor token usage. Long conversations hit context limits. Implement summarization or memory pruning for long-running agents.
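A simple pruning strategy keeps only the most recent messages. The `getMessages()` and `setMessages()` methods below are hypothetical names for illustration; adapt them to whatever AIGNE's memory API actually exposes:

```typescript
import { MemoryStore } from '@aigne/core';

// Hypothetical helper: cap history at the last `keep` messages.
// getMessages()/setMessages() are assumed method names, not confirmed API.
function pruneMemory(memory: MemoryStore, keep = 20) {
  const messages = memory.getMessages();
  if (messages.length > keep) {
    memory.setMessages(messages.slice(-keep));
  }
}
```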
**Caching**: AIGNE supports prompt caching for providers that offer it. Cached system prompts are billed at a steep discount rather than full price, which cuts costs for repeated calls.
## Troubleshooting
"Module not found" errors: Ensure your tsconfig.json has "moduleResolution": "node" and package.json includes "type": "module".
**API key issues**: Double-check .env loading. You might need dotenv:

```bash
npm install dotenv
```

Then at the top of your file:

```typescript
import 'dotenv/config';
```
**Rate limit errors**: Implement exponential backoff. Most providers enforce rate limits. Retry failed requests with increasing delays, as in the safeChat helper under Common Patterns.
**Unexpected responses**: Check your system prompt. Vague instructions produce inconsistent results. Be specific about format, tone, and content.
## The AIGNE Ecosystem
AIGNE integrates with ArcBlock's broader platform:
**Blocklet deployment**: Package agents as Blocklets for one-click deployment and updates.

**DID-based identity**: Agents can authenticate using Decentralized Identifiers instead of API keys.

**Self-hosting**: Run everything on your infrastructure. No vendor lock-in, no data sharing requirements.
This integration means you can build AI agents locally, test them thoroughly, then deploy to production without changing code or managing infrastructure.
## Conclusion
You've built your first AI agent with AIGNE, added memory and file operations, implemented multi-agent workflows, and learned how to switch between LLM providers. The framework handles the complexity of LLM interactions while giving you full control through TypeScript's type system.
The key insight: AI agents are most useful when specialized, coordinated, and integrated with your existing codebase. AIGNE makes this possible without forcing you into Python-centric tooling or sacrificing type safety.
Start small with a single agent, validate it works for your use case, then expand into workflows as needs grow. The framework scales from simple chatbots to complex multi-agent systems without requiring architectural rewrites.
For more examples, check the AIGNE GitHub repository and join the community building the next generation of AI-native applications.