Examples
Ready to see the AIGNE Framework in action? This section provides a comprehensive collection of practical examples that demonstrate various features and workflow patterns. You can skip the complex setup and dive straight into running functional agents with one-click commands.
Overview#
The AIGNE Framework examples offer hands-on demonstrations for a range of applications, from intelligent chatbots to complex, multi-agent workflows. Each example is a self-contained, executable demo designed to illustrate a specific capability of the framework. You can explore topics such as Model Context Protocol (MCP) integration, memory persistence, concurrent and sequential task processing, and dynamic code execution.
For detailed information on a specific feature or workflow, refer to the corresponding example document.
Quick Start (No Installation Required)#
You can run any example directly from your terminal using npx without needing to clone the repository or perform a local installation.
Prerequisites#
Ensure you have Node.js (version 20.0 or higher) and npm installed on your system.
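To confirm that your environment meets these requirements, you can check the installed versions from your terminal:

Check installed versions
node --version
npm --version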
Running an Example#
The following command executes the basic chatbot example in one-shot mode, where it takes a default prompt, provides a response, and then exits.
Run in one-shot mode
npx -y @aigne/example-chat-bot

To have an interactive conversation with the agent, add the --chat flag.
Run in interactive mode
npx -y @aigne/example-chat-bot --chat

You can also pipe input directly to the agent.
Use pipeline input
echo "Tell me about AIGNE Framework" | npx -y @aigne/example-chat-bot

Connecting to an AI Model#
Running an example requires a connection to an AI model. If you run a command without any prior configuration, you will be prompted to connect.

You have three options to establish a connection:
1. Connect to the Official AIGNE Hub#
This is the recommended option for new users. The AIGNE Hub provides a seamless connection experience and grants new users free tokens to get started immediately.
- Select the first option in the prompt.
- Your browser will open the official AIGNE Hub page.
- Follow the on-screen instructions to authorize the AIGNE CLI.

2. Connect to a Self-Hosted AIGNE Hub#
If your organization runs a private instance of the AIGNE Hub, you can connect to it directly.
- Select the second option in the prompt.
- Enter the URL of your self-hosted AIGNE Hub and follow the prompts to complete the connection.

If you need to deploy your own AIGNE Hub, you can do so from the Blocklet Store.
3. Connect via Third-Party Model Provider#
You can connect directly to a third-party AI model provider by setting the appropriate environment variables. Exit the interactive prompt and configure the API key for your chosen provider.
For example, to use OpenAI, set the OPENAI_API_KEY environment variable:
Set your OpenAI API key
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"

After setting the key, run the example command again.
Configuring Language Models#
The examples can be configured to use various large language models by setting the MODEL environment variable along with the corresponding API key. The MODEL variable follows the format provider:model-name.
OpenAI#
OpenAI Configuration
export MODEL=openai:gpt-4o
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY

Anthropic#
Anthropic Configuration
export MODEL=anthropic:claude-3-5-sonnet-20240620
export ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY

Google Gemini#
Google Gemini Configuration
export MODEL=gemini:gemini-1.5-flash
export GEMINI_API_KEY=YOUR_GEMINI_API_KEY

AWS Bedrock#
AWS Bedrock Configuration
export MODEL=bedrock:anthropic.claude-3-sonnet-20240229-v1:0
export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_ACCESS_KEY"
export AWS_REGION="us-east-1"

DeepSeek#
DeepSeek Configuration
export MODEL=deepseek:deepseek-chat
export DEEPSEEK_API_KEY=YOUR_DEEPSEEK_API_KEY

Doubao#
Doubao Configuration
export MODEL=doubao:Doubao-pro-128k
export DOUBAO_API_KEY=YOUR_DOUBAO_API_KEY

xAI (Grok)#
xAI Configuration
export MODEL=xai:grok-1.5-flash
export XAI_API_KEY=YOUR_XAI_API_KEY

Ollama (Local Models)#
Ollama Configuration
export MODEL=ollama:llama3
export OLLAMA_DEFAULT_BASE_URL="http://localhost:11434"
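Ollama serves models that have already been downloaded locally. If you have not pulled the model yet, you can do so first (this assumes a standard Ollama installation; llama3 matches the model referenced above):

Pull the model locally
ollama pull llama3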
LMStudio (Local Models)#
LMStudio Configuration
export MODEL=lmstudio:local-model/llama-3.1-8b-instruct-gguf
export LM_STUDIO_DEFAULT_BASE_URL="http://localhost:1234/v1"

For a complete list of supported models and their configuration details, please refer to the Models Overview section.
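Because these settings are ordinary environment variables, you can also supply them inline for a single run instead of exporting them for the whole shell session. For example, using OpenAI with a placeholder key:

Run with inline configuration
MODEL=openai:gpt-4o OPENAI_API_KEY=YOUR_OPENAI_API_KEY npx -y @aigne/example-chat-bot --chat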
Debugging and Observation#
To gain insight into an agent's execution flow, you can use two primary methods: debug logs for real-time terminal output and the AIGNE observability server for a more detailed, web-based analysis.
Debug Logs#
Enable debug logging by setting the DEBUG environment variable. This will print detailed information about model calls, responses, and other internal operations directly to your terminal.
Enable Debug Logs
DEBUG=* npx -y @aigne/example-chat-bot --chat
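Debug output can be lengthy, so you may want to capture it for later review. Assuming the logs are written to stderr (the convention for DEBUG-style logging), you can redirect them to a file:

Save debug logs to a file
DEBUG=* npx -y @aigne/example-chat-bot --chat 2> aigne-debug.log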
AIGNE Observe#
The aigne observe command starts a local web server to monitor and analyze agent execution data. This tool is essential for debugging, performance tuning, and understanding how your agent processes information.
- Install AIGNE CLI:
Install AIGNE CLI
npm install -g @aigne/cli
- Start the observation server:
Start observation server
aigne observe
- View traces:
After running an example, open your browser to http://localhost:7893 to inspect traces, view detailed call information, and understand your agent’s runtime behavior.