Create an AI Agent
Build an AI Agent that runs on-device with Model Context Protocol (MCP) tools. This tutorial uses create-mim to scaffold a complete AI Agent project.
If you haven't already, read How AI Agent mims Work first to understand the architecture and core concepts. This tutorial puts those concepts into practice.
What You'll Build
An AI Agent mim that:
- Exposes an MCP server with custom tools
- Uses `@mimik/agent-kit` for AI orchestration
- Streams responses via Server-Sent Events (SSE)
- Discovers nearby devices using mimOE's mesh capabilities
Prerequisites
Before starting, ensure you have:
- Node.js 18+ installed
- mimOE running with the AI Foundation Addon (see Quick Start)
- A model with tool use support loaded for inference (see below)
Verify mimOE is running:
```bash
curl http://localhost:8083/mimik-ai/openai/v1/models \
  -H "Authorization: Bearer 1234"
```
Model Requirements for Function Calling
Not all models support function calling (tool use). The model must be specifically trained for tool use; models without this training cannot power AI Agents that call MCP tools.
Recommended model: Qwen3 (e.g., qwen3-1.7b). It supports both reasoning and tool use and is optimized for on-device inference.
Context Size Requirements
Tool use requires a larger context window because the conversation includes tool definitions, tool calls, and tool results. Two settings control this:
- `initContextSize` (Model Registry): Total context window size (input + output combined). Set when uploading the model:

  ```json
  {
    "initContextSize": 12000
  }
  ```

  For tool-using agents, 12K+ tokens is recommended. See Model Registry API.
- `max_tokens` (Inference/agent-kit): Maximum tokens to generate in the response. Set in your agent configuration:

  ```js
  llm: {
    max_tokens: 4096, // Maximum output tokens
    // ... other config
  }
  ```

  This limits output length, not total context. See Inference API.
initContextSize is the total budget. Your input (instructions + tool definitions + history) uses part of it, leaving the rest for output. If your input uses 8K tokens and initContextSize is 12K, you have 4K tokens available for the response.
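The budget arithmetic above can be sketched directly. The numbers below are illustrative, not measured values from a real conversation:

```javascript
// Illustrative token budget check for a tool-using agent.
// initContextSize is the total window; input and output share it.
const initContextSize = 12000; // total context, set in the Model Registry
const inputTokens = 8000;      // instructions + tool definitions + history

// Whatever the input doesn't consume is left for the response.
const availableForOutput = initContextSize - inputTokens;
console.log(availableForOutput); // 4000

// max_tokens should fit within what remains, or generation may be cut short.
const maxTokens = 4096;
if (maxTokens > availableForOutput) {
  console.log("max_tokens exceeds the remaining context budget");
}
```

In this example a `max_tokens` of 4096 slightly overshoots the 4K tokens actually available, so either the input would need trimming or `max_tokens` lowering.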
Step 1: Create the Project
Use create-mim to scaffold an AI Agent project:
```bash
npx create-mim create my-agent -t ai-agent
```
This creates a my-agent/ directory with the complete project structure.
Navigate to the project and install dependencies:
```bash
cd my-agent
npm install
```
Step 2: Project Structure
The generated project contains:
```
my-agent/
├── src/
│   ├── index.js     # Router with /mcp, /healthcheck, /chat endpoints
│   ├── agent.js     # AI Agent orchestration
│   └── tools.js     # MCP tools definition
├── config/
│   └── default.yml  # Package configuration
├── test/
│   └── simple.http  # REST client tests
├── package.json
```
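The generated tools.js defines the MCP tools the agent can call. As a rough illustration (not the generated file's contents), an MCP tool pairs a name and description with a JSON Schema describing its input; the `get_device_info` tool and its handler below are hypothetical:

```javascript
// Hypothetical MCP tool definition: name, human-readable description,
// and a JSON Schema contract for the arguments the model must supply.
const getDeviceInfoTool = {
  name: "get_device_info",
  description: "Returns basic information about a nearby device",
  inputSchema: {
    type: "object",
    properties: {
      deviceId: { type: "string", description: "mimOE device identifier" },
    },
    required: ["deviceId"],
  },
};

// The handler runs when the model issues a tool call with matching arguments.
async function handleGetDeviceInfo({ deviceId }) {
  // A real tool would query mimOE's mesh; this stub just echoes a status.
  return { deviceId, status: "online" };
}

handleGetDeviceInfo({ deviceId: "device-1" }).then((result) =>
  console.log(JSON.stringify(result))
);
```

The description and schema matter as much as the handler: they are what the model reads when deciding whether and how to call the tool.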