# Stack Integrations

Nestor connects to your existing tools and workflows. Out of the box, it integrates with Obsidian for knowledge management, n8n for automation, and OpenRouter for access to 300+ LLMs.
## Obsidian Integration
Connect Nestor to your Obsidian vault for seamless knowledge management. Agents can read, write, search, and link notes in your vault.
### Available Tools (4)

| Tool | Description |
|---|---|
| `obsidian_read` | Read a note from your vault by path or title |
| `obsidian_write` | Create or update a note with Markdown content |
| `obsidian_search` | Full-text search across your entire vault |
| `obsidian_link` | Create wiki-links between notes and manage backlinks |
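To illustrate what `obsidian_link`-style backlink management involves, here is a minimal sketch (not Nestor's actual implementation) that extracts Obsidian-style `[[wiki-links]]` from Markdown content, including the `[[Note|alias]]` and `[[Note#heading]]` forms:

```python
import re

# Matches [[Note]], [[Note|alias]], and [[Note#heading]]; group 1 is the note title.
WIKILINK = re.compile(r"\[\[([^\]|#]+)(?:#[^\]|]*)?(?:\|[^\]]*)?\]\]")

def extract_links(markdown: str) -> list[str]:
    """Return the note titles referenced by [[wiki-links]] in `markdown`."""
    return [m.group(1).strip() for m in WIKILINK.finditer(markdown)]

text = "See [[Mission Report]] and [[Agents|the agent docs]], plus [[Setup#Install]]."
print(extract_links(text))  # → ['Mission Report', 'Agents', 'Setup']
```

Inverting this mapping over all notes in a vault yields the backlink index that `auto_link` maintains.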
### Configuration

```yaml
# .nestor/config.yaml
integrations:
  obsidian:
    enabled: true
    vault_path: "/path/to/your/vault"
    default_folder: "Nestor"  # where Nestor creates notes
    template: "nestor-note"   # Obsidian template to use
    auto_link: true           # auto-create backlinks
```
**Tip:** Mission reports are automatically saved to your Obsidian vault when the integration is enabled. Each report becomes a linked note with tags, entities, and cross-references.
## n8n Integration
Connect Nestor to n8n for workflow automation. Agents can trigger n8n workflows, send data, and receive results.
### Available Tools (3)

| Tool | Description |
|---|---|
| `n8n_trigger` | Trigger an n8n workflow by webhook URL or workflow ID |
| `n8n_send` | Send structured data to an n8n workflow input |
| `n8n_status` | Check the status and output of a running n8n execution |
### Configuration

```yaml
# .nestor/config.yaml
integrations:
  n8n:
    enabled: true
    base_url: "http://localhost:5678"
    api_key: "${N8N_API_KEY}"   # from environment variable
    webhook_prefix: "/webhook"  # webhook URL prefix
    timeout: 30000              # max wait time in ms
```
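Under the hood, triggering an n8n workflow is an HTTP POST to its webhook URL. A rough sketch of what a tool like `n8n_trigger` might do with the settings above (illustrative only, not Nestor's source; the workflow path `mission-complete` is a made-up example):

```python
import json
import urllib.request

def webhook_url(base_url: str, webhook_prefix: str, workflow_path: str) -> str:
    """Join the configured base URL, webhook prefix, and workflow path."""
    return f"{base_url.rstrip('/')}{webhook_prefix}/{workflow_path.lstrip('/')}"

def trigger(base_url: str, workflow_path: str, payload: dict,
            webhook_prefix: str = "/webhook", timeout_ms: int = 30000) -> dict:
    """POST `payload` to an n8n webhook and return the JSON response."""
    req = urllib.request.Request(
        webhook_url(base_url, webhook_prefix, workflow_path),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout_ms / 1000) as resp:
        return json.load(resp)

# The URL built from the config above for a hypothetical workflow:
print(webhook_url("http://localhost:5678", "/webhook", "mission-complete"))
# → http://localhost:5678/webhook/mission-complete
```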
### Use Cases
- Trigger email notifications when missions complete
- Push findings to a CRM or project management tool
- Automate a data pipeline: agent research → spreadsheet → report
- Connect to Slack, Discord, or Telegram for real-time updates
## OpenRouter Integration
OpenRouter provides access to 300+ models from all major providers through a single API key. This is the fastest way to access models from Anthropic, OpenAI, Google, Meta, Mistral, and many more.
### Setup

```bash
# During install, select OpenRouter as a provider
npx nestor-sh install

# Or add it manually
npx nestor-sh config set openrouter.api_key "sk-or-v1-..."
```
### Using OpenRouter Models

```bash
# Create an agent with an OpenRouter model
npx nestor-sh agent create \
  --name researcher \
  --adapter openrouter \
  --model anthropic/claude-sonnet-4-6

# Or use any of 300+ models
npx nestor-sh agent create \
  --name fast-coder \
  --adapter openrouter \
  --model meta-llama/llama-3.3-70b-instruct

# Use in the shell
npx nestor-sh shell --agent researcher
```
### Configuration

```yaml
# .nestor/config.yaml
integrations:
  openrouter:
    enabled: true
    api_key: "${OPENROUTER_API_KEY}"
    site_url: "https://your-app.com"  # for ranking on openrouter.ai
    app_name: "My Nestor Instance"    # app identifier
    fallback_models:                  # fallback chain
      - anthropic/claude-sonnet-4-6
      - openai/gpt-4o
      - meta-llama/llama-3.3-70b-instruct
```
**Tip:** OpenRouter tracks usage and costs across all models. Use it as your primary adapter if you frequently switch between providers or want to try new models without managing multiple API keys.
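The `fallback_models` chain amounts to a try-in-order loop: use the first model that answers, move on when one fails. A sketch of the idea (not Nestor's internals) with a caller-supplied completion function:

```python
from collections.abc import Callable

def complete_with_fallback(models: list[str], prompt: str,
                           call: Callable[[str, str], str]) -> tuple[str, str]:
    """Try each model in order; return (model, response) from the first success."""
    errors: dict[str, str] = {}
    for model in models:
        try:
            return model, call(model, prompt)
        except Exception as exc:  # a real client would narrow this to API errors
            errors[model] = str(exc)
    raise RuntimeError(f"all models failed: {errors}")

# Demo with a fake `call` where the first model is unavailable:
def fake_call(model: str, prompt: str) -> str:
    if model == "anthropic/claude-sonnet-4-6":
        raise TimeoutError("rate limited")
    return f"{model} says hi"

chain = ["anthropic/claude-sonnet-4-6", "openai/gpt-4o"]
print(complete_with_fallback(chain, "Hello", fake_call))
# → ('openai/gpt-4o', 'openai/gpt-4o says hi')
```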
## Full Configuration Example
Here is a complete integration configuration combining all three services:
```yaml
# .nestor/config.yaml — integrations section
integrations:
  obsidian:
    enabled: true
    vault_path: "~/Documents/MyVault"
    default_folder: "Nestor/Reports"
    auto_link: true
  n8n:
    enabled: true
    base_url: "http://localhost:5678"
    api_key: "${N8N_API_KEY}"
    timeout: 30000
  openrouter:
    enabled: true
    api_key: "${OPENROUTER_API_KEY}"
    fallback_models:
      - anthropic/claude-sonnet-4-6
      - openai/gpt-4o
      - google/gemini-2.5-pro
```
**Important:** Never hardcode API keys in configuration files. Use environment variables (`${VAR_NAME}` syntax) or the secure key store (`npx nestor-sh config set`).
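The `${VAR_NAME}` syntax works like shell-style substitution. A minimal sketch of how such placeholders can be resolved against the environment (illustrative; assumes an unset variable is an error):

```python
import os
import re

PLACEHOLDER = re.compile(r"\$\{([A-Z_][A-Z0-9_]*)\}")

def expand_env(value: str, env=None) -> str:
    """Replace ${VAR} placeholders in a config value with environment values."""
    env = os.environ if env is None else env
    def sub(m: re.Match) -> str:
        name = m.group(1)
        if name not in env:
            raise KeyError(f"environment variable {name} is not set")
        return env[name]
    return PLACEHOLDER.sub(sub, value)

print(expand_env("${N8N_API_KEY}", {"N8N_API_KEY": "secret123"}))  # → secret123
```

Failing loudly on a missing variable is safer than silently passing the literal `${...}` string to an API as a credential.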