# Infrastructure for Agents

> Agent-facing map of Sean/log1kal's project infrastructure. Use this as a routing table, not a diary.
>
> Last verified: `2026-05-05T22:14:24Z`

## Agent contract

- Default project host: `atom`.
- Tailnet domain: `tail6a522.ts.net`.
- Host-addressed atom DNS: `atom.tail6a522.ts.net`.
- Atom LAN IP used by many services: `192.168.1.12`.
- Durable Docker config/state convention: `/zpool1/docker_configs/<service>/`.
- Larger service data may live under `/zpool1/docker_data/<service>/`.
- Hermes workspace/repo path: `/opt/data/workspace` inside the Hermes container, mapped to `/home/hermes/workspace` on atom.
- Prefer Portainer-managed stacks for long-lived services on atom.
- `hermes` does not have passwordless sudo on atom; do not assume otherwise.
- Do not edit/redeploy the Hermes stack casually; restarting it can interrupt the active agent session.
- Do not print secrets. This page lists secret *locations*, not secret values.

## Access patterns

| Need | Use |
|---|---|
| Host shell/build orchestration | SSH to `hermes@atom.tail6a522.ts.net` |
| Long-lived Docker services | Portainer API / Portainer stack on endpoint `2` |
| Repo/work files | `/home/hermes/workspace` on atom or `/opt/data/workspace` in Hermes |
| Routine agent memory | MCP memory service |
| Bulk vector/RAG storage | Qdrant |
| Credentials | 1Password MCP or configured secret files; never hardcode |
| Monitoring | Uptime Kuma |
| Workflow automation | n8n |

## Core host: atom

| Field | Value |
|---|---|
| Tailnet hostname | `atom.tail6a522.ts.net` |
| Common LAN IP | `192.168.1.12` |
| Primary agent user | `hermes` |
| Hermes UID/GID in container | `10000:10000` |
| Docker manager | Portainer |
| Portainer endpoint ID | `2` |
| Portainer API from atom | `https://localhost:9443/api` |
| Portainer API token path | `/home/hermes/.config/portainer/api_token` |
| Preferred bind root | `/zpool1/docker_configs/<service>/` |
| Shared workspace | `/home/hermes/workspace` |

### Atom operational rules

1. Use SSH for host commands: `ssh atom.tail6a522.ts.net '...'`.
2. Use Portainer API for Docker inspection/mutation; `hermes` does not have direct Docker socket access.
3. Use host-visible bind mounts for service persistence.
4. For services needing tailnet identity, prefer a Tailscale sidecar with `network_mode: service:<tailscale-service>`.
5. Verify service health after stack changes through the service API, not just container state.
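
The sidecar pattern in rule 4 can be sketched as a compose fragment. Service names, the app image, and the mount paths below are illustrative, not copied from a live stack; adapt them to the actual service:

```yaml
services:
  ts-myservice:                       # hypothetical Tailscale sidecar
    image: tailscale/tailscale:latest
    hostname: myservice               # becomes myservice.tail6a522.ts.net
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}      # note: TS_AUTHKEY, not TS_AUTH_KEY
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - /zpool1/docker_configs/myservice/ts-state:/var/lib/tailscale
    cap_add:
      - NET_ADMIN

  myservice:
    image: example/myservice:latest   # hypothetical app image
    network_mode: service:ts-myservice  # share the sidecar's network namespace
```

The app container publishes no ports of its own; it is reachable only through the sidecar's tailnet identity.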

## Published / agent-relevant services

### Portainer

| Field | Value |
|---|---|
| Purpose | Docker stack/container management |
| Container | `portainer` |
| API from atom | `https://localhost:9443/api` |
| UI/API host port | `9443/tcp` on atom |
| Edge port | `8000/tcp` on atom |
| Endpoint ID | `2` |
| Token path | `/home/hermes/.config/portainer/api_token` |

Use for creating/updating/verifying stacks. The local TLS certificate is self-signed, so API clients usually need TLS verification disabled (e.g. `curl -k` or an unverified SSL context).

### Hermes Agent

| Field | Value |
|---|---|
| Stack | `hermes` / Portainer stack ID `100` |
| Containers | `hermes`, `hermes-dashboard` |
| Gateway port | `8642/tcp` on atom |
| Dashboard port | `9119/tcp` on atom |
| Container image | `ghcr.io/1kpio/hermes-ssh-image:latest` |
| Hermes home in container | `/opt/data` |
| Host config mount | `/zpool1/docker_configs/hermes-agent/home:/opt/data` |
| Workspace mount | `/home/hermes/workspace:/opt/data/workspace` |
| Config file | `/opt/data/config.yaml` |
| Env file on host | `/zpool1/docker_configs/hermes-agent/home/.env` |

Notes:

- Current active agent may be running here. Treat stack edits as disruptive.
- SSH key behavior inside Hermes resolves under `HERMES_HOME` (`/opt/data/.ssh`) unless an explicit `IdentityFile` is used.
- Hermes external memory provider is Supermemory; see memory section.
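
For deterministic key selection inside Hermes, an explicit `IdentityFile` entry sidesteps the `HERMES_HOME` resolution behavior. The key filename below is an assumption; use whatever key actually exists under `/opt/data/.ssh`:

```text
# /opt/data/.ssh/config inside the Hermes container
Host atom
    HostName atom.tail6a522.ts.net
    User hermes
    IdentityFile /opt/data/.ssh/id_ed25519   # hypothetical key name
```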

### n8n

| Field | Value |
|---|---|
| Stack | `n8n` / Portainer stack ID `101` |
| Containers | `ts-n8n`, `n8n` |
| Tailnet URL | `https://n8n.tail6a522.ts.net/` |
| Health URL | `https://n8n.tail6a522.ts.net/healthz` |
| LAN bind | `192.168.1.12:5678` |
| App port | `5678/tcp` |
| Persistent data | `/zpool1/docker_configs/n8n/data:/home/node/.n8n` |
| Cache | `/zpool1/docker_configs/n8n/cache:/home/node/.cache` |
| Local files | `/zpool1/docker_configs/n8n/local-files:/files` |

Use for workflow automation and webhook glue. Webhook base is `https://n8n.tail6a522.ts.net/`.

### MCP memory service

| Field | Value |
|---|---|
| Stack | `mcp-memory` / Portainer stack ID `99` |
| Containers | `ts-memory`, `mcp-memory` |
| Tailnet URL | `https://mcp-memory-service.tail6a522.ts.net/` |
| Internal app port | `8000/tcp` |
| Backend | `sqlite_vec` |
| SQLite DB in container | `/app/data/memory.db` |
| Host data | `/zpool1/docker_configs/mcp-memory/data` |
| Anonymous access | Enabled in current stack config |

Known REST endpoints:

```text
POST /api/memories
GET  /api/memories
POST /api/search
POST /api/search/by-tag
POST /api/search/by-time
GET  /api/search/similar/{content_hash}
```

Use this for routine cross-agent durable memory. Do not write small preferences/facts directly to Qdrant; reserve Qdrant for bulk RAG workflows.
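
Hedged request sketches for the two most common operations. The field names follow typical mcp-memory-service payloads and should be verified against the live API before scripting:

```text
POST /api/memories
{"content": "Sean prefers Portainer-managed stacks on atom", "tags": ["infra", "preference"]}

POST /api/search
{"query": "preferred stack management on atom", "n_results": 5}
```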

### Qdrant

| Field | Value |
|---|---|
| Stack | `qdrant` / Portainer stack ID `102` |
| Container | `qdrant` |
| Image | `qdrant/qdrant:v1.17.1` |
| HTTP API | `http://100.81.104.62:6333` |
| gRPC API | `100.81.104.62:6334` |
| Readiness | `http://100.81.104.62:6333/readyz` |
| Health | `http://100.81.104.62:6333/healthz` |
| API key path on atom | `/home/hermes/.config/qdrant/api_key` |
| Storage | `/zpool1/docker_configs/qdrant/storage:/qdrant/storage` |
| Snapshots | `/zpool1/docker_configs/qdrant/snapshots:/qdrant/snapshots` |

Use for bulk vector search, RAG corpora, document/session archives, and embeddings-backed project indexes.
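
A typical points search against a collection looks like the sketch below. The collection name and vector values are illustrative; the vector length must match the collection's configured dimensionality, and the API key goes in the `api-key` header:

```text
POST http://100.81.104.62:6333/collections/<collection>/points/search
api-key: <value of /home/hermes/.config/qdrant/api_key>

{"vector": [0.12, 0.05, ...], "limit": 5, "with_payload": true}
```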

### Ollama / Open WebUI / LiteLLM

| Field | Value |
|---|---|
| Stack | `ollama` / Portainer stack ID `63` |
| Containers | `ts-aichat-server`, `ollama`, `open-webui`, `litellm` |
| Tailnet URL | `https://aichat.tail6a522.ts.net/` |
| Ollama API via Tailnet | `https://aichat.tail6a522.ts.net/ollama` |
| Chat API example | `https://aichat.tail6a522.ts.net/ollama/api/chat` |
| Tailscale hostname | `aichat` |
| Ollama persistence | `/zpool1/docker_data/ollama:/root/.ollama` |
| Open WebUI persistence | `/zpool1/docker_data/open-webui:/app/backend/data` |
| GPU | Ollama stack reserves NVIDIA GPUs |

Use for local/open model inference when an agent needs tailnet-accessible model serving.
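
A minimal chat request through the tailnet proxy follows the standard Ollama `/api/chat` shape (the model name is illustrative; list installed models via `/ollama/api/tags` first):

```text
POST https://aichat.tail6a522.ts.net/ollama/api/chat

{"model": "llama3.1", "messages": [{"role": "user", "content": "ping"}], "stream": false}
```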

### Uptime Kuma

| Field | Value |
|---|---|
| Stack | `uptime-kuma` / Portainer stack ID `103` |
| Containers | `ts-uptime-kuma`, `uptime-kuma` |
| Tailnet URL | `https://uptime-kuma.tail6a522.ts.net/` |
| LAN bind | `192.168.1.12:3001` |
| App port | `3001/tcp` |
| Data | `/zpool1/docker_configs/uptime-kuma/data:/app/data` |
| Docker socket | `/var/run/docker.sock:/var/run/docker.sock:ro` |
| Admin password path | `/home/hermes/.config/uptime-kuma/admin_password` |

Use for service monitoring. Prefer application-level HTTP/API probes over container-only checks.

### ntfy

| Field | Value |
|---|---|
| Stack | `ntfy` / Portainer stack ID `38` |
| Container | `ntfy` |
| Tailnet URL | `https://ntfy.tail6a522.ts.net/` |
| Health URL | `https://ntfy.tail6a522.ts.net/v1/health` |
| LAN bind | `192.168.1.12:5080` |
| App port | `80/tcp` in container |
| Cache | `/zpool1/docker_configs/ntfy/var/cache/ntfy` |
| Config | `/zpool1/docker_configs/ntfy/etc/ntfy` |
| Common alerts topic | `alerts` |

Use for push notifications and alert delivery. Verify message history via `/alerts/json?...` when debugging mobile display problems.
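
Publishing to the `alerts` topic is a plain HTTP POST to the topic URL; `Title` and `Priority` are standard ntfy headers, and the body is the message text:

```text
POST https://ntfy.tail6a522.ts.net/alerts
Title: disk space low on atom
Priority: high

zpool1 is at 92% capacity
```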

### OtterWiki

| Field | Value |
|---|---|
| Stack | `otter` / Portainer stack ID `79` |
| Containers | `ts-otter-server`, `otter-otterwiki-1` |
| Tailnet URL | `https://otter.tail6a522.ts.net/` |
| App image | `redimp/otterwiki:2` |
| Host data | `/zpool1/docker_configs/otter/app-data` |
| Wiki repository | `/zpool1/docker_configs/otter/app-data/repository` |

This page is hosted here. OtterWiki stores pages as Markdown files in a Git-backed repository.

### Caddy

| Field | Value |
|---|---|
| Stack | `caddy` / Portainer stack ID `11` |
| Container | `caddy` |
| Public HTTP | `80/tcp` on atom |
| Public HTTPS | `443/tcp` on atom |
| Caddyfile | `/zpool1/docker_configs/caddy/Caddyfile` |
| Site root | `/zpool1/srv` mounted at `/srv` |
| Config/data | `/zpool1/docker_configs/caddy/config`, `/zpool1/docker_configs/caddy/data` |

Use for conventional HTTP reverse proxy/site hosting when not using Tailscale Serve.

### SearXNG

| Field | Value |
|---|---|
| Stack | `searxng` / Portainer stack ID `66` |
| Containers | `ts-searxng-server`, `searxng-searxng-1` |
| Tailnet URL | `https://searxng.tail6a522.ts.net/` |

Use as a private metasearch endpoint when browser/search tools are constrained.

### OpenClaw

| Field | Value |
|---|---|
| Stack | `openclaw` / Portainer stack ID `96` |
| Containers | `ts-openclaw`, `openclaw-gateway` |
| Image | `openclaw:local` |
| Host config referenced by Hermes | `/zpool1/docker_configs/openclaw/.openclaw` mounted at `/opt/openclaw` |

Use only if the current task specifically needs OpenClaw. Do not infer its API surface from this page alone.

### Kasm browser/desktops

| Stack | Containers | Published port / URL hints |
|---|---|---|
| `kasm-ubuntu` / ID `97` | `tailscale-kasm-ubuntu`, `kasm-ubuntu` | `6901/tcp` published |
| `kasm-claw` / ID `98` | `tailscale-kasm-claw`, `kasm-claw` | `6910/tcp` published |

Use for GUI/browser desktop workloads when available. Check current health before depending on them.

## Other running services on atom

These exist and may be useful, but many are personal/media/home services rather than agent infrastructure.

| Service/stack | Known port or endpoint |
|---|---|
| `adguardhome` | DNS `192.168.1.12:53` TCP/UDP; UI `192.168.1.12:8008` |
| `frigate` | HTTP `:5000`, RTSP `:8554`, WebRTC `:8555`, go2rtc `:1984` |
| `mealie` | `:9925` |
| `scrutiny` | `192.168.1.12:4080`, collector `:4086` |
| `rebootcontrol` | `:7234` |
| `radarr` | `:7878` |
| `sonarr` | `:8989` |
| `bazarr` | `:6767` |
| `sabnzbd` | `:8080` |
| `overseerr` | `:5055` |
| `tautulli` | `:8181` |
| `readarr` | `:9157` |
| `calibre` | `:19901`, `:19902`, `:19903` |
| `heimdall` | `:7080`, `:7443` |
| `pairdrop` | `:4000` |
| `homarr` | `:7575` |
| `stash` / `xbvr` | `:9999`; xbvr also `:12345` |

Check live Portainer state before using any of these.

## Memory architecture

| Layer | Role | Endpoint / location |
|---|---|---|
| Hermes Supermemory provider | Hermes's own external long-term memory provider | Configured in `/opt/data/config.yaml`; API key in `/opt/data/.env` |
| MCP memory service | Shared agent-facing memory front door | `https://mcp-memory-service.tail6a522.ts.net/` |
| Qdrant | Bulk vector/RAG/document/session substrate | `100.81.104.62:6333/6334` |
| Shared-memory skill repo | Policy/runbook for other agents | `https://github.com/dixie-rom/agent-shared-memory-skill` |

Rules:

- Store durable facts/preferences/architecture decisions in MCP memory or the agent's native memory layer.
- Store large documents, session archives, and embedding corpora in Qdrant-backed workflows.
- Never store secrets in memory or Qdrant.
- Prefer concise, stable facts over task-progress logs.
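
An illustrative contrast for the last rule (both entries invented):

```text
Good:  "atom's Portainer endpoint ID is 2; API token at /home/hermes/.config/portainer/api_token"
Bad:   "2026-05-05: started reworking the n8n stack, will pick it up again tomorrow"
```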

## GitHub / code hosting

- Stay on GitHub. Sean decided not to migrate Git hosting.
- Host-side `gh` is authenticated on atom as the working GitHub identity.
- For coding work on atom, use `/home/hermes/workspace`.
- For agents that need reusable shared-memory behavior, install the skill repo:

```bash
npx skills add dixie-rom/agent-shared-memory-skill --all
```

## 1Password

| Field | Value |
|---|---|
| Hermes-accessible vault name | `Dixie` |
| Preferred access | 1Password MCP tools when present |

Do not dump secret values into chat/wiki/logs. If a secret must be referenced, document only its vault/item/field or local file path.

## Current Portainer stack inventory

Status codes are Portainer status values from live inspection. `1` generally means active; `2` means inactive, stopped, or limited, depending on stack state.

| ID | Stack | Status | Agent relevance |
|---:|---|---:|---|
| 11 | `caddy` | 1 | Public reverse proxy/site host |
| 38 | `ntfy` | 1 | Notifications/alerts |
| 63 | `ollama` | 1 | Local model serving / Open WebUI |
| 66 | `searxng` | 1 | Search endpoint |
| 79 | `otter` | 1 | Wiki/docs hosting |
| 96 | `openclaw` | 1 | Specialized agent/browser gateway |
| 97 | `kasm-ubuntu` | 1 | GUI desktop/browser |
| 98 | `kasm-claw` | 1 | GUI desktop/browser |
| 99 | `mcp-memory` | 1 | Shared agent memory |
| 100 | `hermes` | 1 | Active assistant/gateway |
| 101 | `n8n` | 1 | Workflow automation |
| 102 | `qdrant` | 1 | Vector DB/RAG |
| 103 | `uptime-kuma` | 1 | Monitoring |

Other stacks exist on atom. Query Portainer before making assumptions.

## Verification commands

### Check Portainer stacks from atom

```bash
ssh atom.tail6a522.ts.net 'python3 - <<"PY"
import json, ssl, urllib.request
base = "https://localhost:9443/api"
token = open("/home/hermes/.config/portainer/api_token").read().strip()
ctx = ssl.create_default_context(); ctx.check_hostname = False; ctx.verify_mode = ssl.CERT_NONE
req = urllib.request.Request(base + "/stacks", headers={"X-API-Key": token})
with urllib.request.urlopen(req, context=ctx) as r:
    for s in json.load(r):
        print(s["Id"], s["Name"], s["EndpointId"], s.get("Status"))
PY'
```

### Check Qdrant

```bash
ssh atom.tail6a522.ts.net 'curl -fsS http://100.81.104.62:6333/readyz'
```

### Check n8n

```bash
ssh atom.tail6a522.ts.net 'curl -fsS https://n8n.tail6a522.ts.net/healthz'
```

### Check ntfy

```bash
ssh atom.tail6a522.ts.net 'curl -fsS https://ntfy.tail6a522.ts.net/v1/health'
```

## Pitfalls

- `atom.tail6a522.ts.net` is the host. Service-specific Tailscale names like `n8n.tail6a522.ts.net` or `aichat.tail6a522.ts.net` may belong to sidecars.
- Tailscale Docker env var is usually `TS_AUTHKEY`, not `TS_AUTH_KEY`; old stacks may still show historical variants.
- Secret gists are unlisted, not private. Do not use them as a vault.
- Portainer stack files may contain secret values in environment fields. Redact before quoting.
- Container state `running` is not enough. Hit the health/API endpoint.
- The OtterWiki repository on disk is owned by the wiki container user, so host writes may require container/Portainer-side file updates.