Stack Integrations

Nestor connects to your existing tools and workflows. Out of the box, it integrates with Obsidian for knowledge management, n8n for automation, and OpenRouter for access to 300+ LLMs.

Obsidian Integration

Connect Nestor to your Obsidian vault for seamless knowledge management. Agents can read, write, search, and link notes in your vault.

Available Tools (4)

Tool              Description
obsidian_read     Read a note from your vault by path or title
obsidian_write    Create or update a note with Markdown content
obsidian_search   Full-text search across your entire vault
obsidian_link     Create wiki-links between notes and manage backlinks

Configuration

# .nestor/config.yaml
integrations:
  obsidian:
    enabled: true
    vault_path: "/path/to/your/vault"
    default_folder: "Nestor"        # where Nestor creates notes
    template: "nestor-note"         # Obsidian template to use
    auto_link: true                # auto-create backlinks

Tip: Mission reports are automatically saved to your Obsidian vault when the integration is enabled. Each report becomes a linked note with tags, entities, and cross-references.

n8n Integration

Connect Nestor to n8n for workflow automation. Agents can trigger n8n workflows, send data, and receive results.

Available Tools (3)

Tool          Description
n8n_trigger   Trigger an n8n workflow by webhook URL or workflow ID
n8n_send      Send structured data to an n8n workflow input
n8n_status    Check the status and output of a running n8n execution
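n8n workflows are typically triggered over HTTP: a Webhook node exposes a URL under the instance's webhook prefix, and posting JSON to it starts an execution. A rough sketch of the mechanism behind n8n_trigger, using the base_url and webhook_prefix settings from the config below (function names are illustrative, not Nestor's internals):

```python
import json
import urllib.request

def webhook_url(base_url: str, prefix: str, path: str) -> str:
    """Join base_url + webhook prefix + workflow path into the trigger URL."""
    return f"{base_url.rstrip('/')}/{prefix.strip('/')}/{path.strip('/')}"

def trigger_workflow(base_url: str, path: str, payload: dict,
                     prefix: str = "/webhook", timeout: float = 30.0) -> bytes:
    """POST a JSON payload to an n8n webhook and return the raw response."""
    req = urllib.request.Request(
        webhook_url(base_url, prefix, path),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()
```

With the default config, `trigger_workflow("http://localhost:5678", "daily-report", {"topic": "AI"})` posts to http://localhost:5678/webhook/daily-report.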

Configuration

# .nestor/config.yaml
integrations:
  n8n:
    enabled: true
    base_url: "http://localhost:5678"
    api_key: "${N8N_API_KEY}"        # from environment variable
    webhook_prefix: "/webhook"      # webhook URL prefix
    timeout: 30000                  # max wait time in ms

OpenRouter Integration

OpenRouter provides access to 300+ models from all major providers through a single API key. This is the fastest way to access models from Anthropic, OpenAI, Google, Meta, Mistral, and many more.
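The single-key model works because OpenRouter exposes an OpenAI-compatible chat completions endpoint: one request format reaches every provider, and only the model string changes. A minimal sketch using the standard library (build_request is an illustrative helper; the optional HTTP-Referer and X-Title headers correspond to the site_url and app_name settings in the configuration below):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str,
                  site_url: str = "", app_name: str = "") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for any OpenRouter model."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if site_url:
        headers["HTTP-Referer"] = site_url  # used for rankings on openrouter.ai
    if app_name:
        headers["X-Title"] = app_name       # app identifier in usage dashboards
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        OPENROUTER_URL, data=json.dumps(body).encode("utf-8"), headers=headers
    )

# To send (needs a real key in OPENROUTER_API_KEY):
#   import os
#   with urllib.request.urlopen(build_request("anthropic/claude-sonnet-4-6", "Hello",
#                                             os.environ["OPENROUTER_API_KEY"])) as r:
#       print(json.load(r)["choices"][0]["message"]["content"])
```

Switching from Claude to Llama is a one-string change to `model`; the request shape, headers, and response parsing stay identical.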

Setup

# During install, select OpenRouter as a provider
npx nestor-sh install

# Or add it manually
npx nestor-sh config set openrouter.api_key "sk-or-v1-..."

Using OpenRouter Models

# Create an agent with an OpenRouter model
npx nestor-sh agent create \
  --name researcher \
  --adapter openrouter \
  --model anthropic/claude-sonnet-4-6

# Or use any of 300+ models
npx nestor-sh agent create \
  --name fast-coder \
  --adapter openrouter \
  --model meta-llama/llama-3.3-70b-instruct

# Use in the shell
npx nestor-sh shell --agent researcher

Configuration

# .nestor/config.yaml
integrations:
  openrouter:
    enabled: true
    api_key: "${OPENROUTER_API_KEY}"
    site_url: "https://your-app.com"  # for ranking on openrouter.ai
    app_name: "My Nestor Instance"    # app identifier
    fallback_models:                 # fallback chain
      - anthropic/claude-sonnet-4-6
      - openai/gpt-4o
      - meta-llama/llama-3.3-70b-instruct
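A fallback chain like the one above is simple to reason about: try each model in order and move to the next when a call fails. A generic sketch of the mechanism (not Nestor's actual implementation):

```python
def complete_with_fallback(models, call):
    """Try each model in order; return (model, result) for the first success.

    `call` is any function that takes a model id and either returns a
    response or raises an exception (rate limit, outage, etc.).
    """
    errors = {}
    for model in models:
        try:
            return model, call(model)
        except Exception as exc:  # in practice, catch the client's error types
            errors[model] = exc
    raise RuntimeError(f"all models in the fallback chain failed: {errors}")
```

If the first model raises (provider outage, rate limit), the chain silently advances, so agents keep running at the cost of a possibly different model answering.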

Tip: OpenRouter tracks usage and costs across all models. Use it as your primary adapter if you frequently switch between providers or want to try new models without managing multiple API keys.

Full Configuration Example

Here is a complete integration configuration combining all three services:

# .nestor/config.yaml — integrations section
integrations:
  obsidian:
    enabled: true
    vault_path: "~/Documents/MyVault"
    default_folder: "Nestor/Reports"
    auto_link: true

  n8n:
    enabled: true
    base_url: "http://localhost:5678"
    api_key: "${N8N_API_KEY}"
    timeout: 30000

  openrouter:
    enabled: true
    api_key: "${OPENROUTER_API_KEY}"
    fallback_models:
      - anthropic/claude-sonnet-4-6
      - openai/gpt-4o
      - google/gemini-2.5-pro

Important: Never hardcode API keys in configuration files. Use environment variables (${VAR_NAME} syntax) or the secure key store (npx nestor-sh config set).
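The ${VAR_NAME} syntax behaves like ordinary shell-style variable expansion: the value is read from the process environment when the config is loaded, so the key never appears in the file. Python's standard library illustrates the general substitution rule (this shows the mechanism, not Nestor's loader):

```python
import os
import os.path

# Normally set in your shell profile, never in the config file itself.
os.environ["N8N_API_KEY"] = "demo-key"

raw = 'api_key: "${N8N_API_KEY}"'
expanded = os.path.expandvars(raw)  # substitutes ${VAR} from the environment
print(expanded)                     # api_key: "demo-key"
```

Because expansion happens at load time, the config file can be committed to version control while the secret stays in the environment or key store.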