FreX By OpenClaw · Est. 2026
§ Comparison · Five stacks, one table

Self-hosted AI agents, compared.

If you want AI you control — on your hardware, your keys, your rules — the realistic options come down to about five serious stacks. Here they are, side by side, with honest notes about what each is good at and where each falls short.

The matrix.

Last updated April 2026
| Dimension | FreX / OpenClaw | Hermes (Nous) | Ollama | LocalAI | n8n + LLM |
|---|---|---|---|---|---|
| What it is | Agent platform + guide | Self-improving agent | Local model runner | OpenAI-compat server | Workflow automation |
| Primary focus | Do real work | Learn over time | Run models locally | API-compat local inference | Visual automation |
| Persistent memory | Yes (3-layer) | Yes (learning loop) | No | No | Via node |
| Identity / SOUL | Yes | Partial | No | No | No |
| Messaging channels | Telegram, Slack, Signal, Discord, SMS | Via integrations | None built-in | None built-in | All the major ones |
| Tool use | File, shell, web, email, calendar | Rich tool library | Model only | Model only | Huge node ecosystem |
| Scheduled / cron | Yes | Yes | No | No | Yes (core feature) |
| Runs local models | Yes (via Ollama/LocalAI) | Yes | Yes (its job) | Yes (its job) | Via external |
| Works with API providers | Anthropic, OpenAI, OpenRouter | Any OpenAI-compatible | Not directly | It IS the API | Any |
| Docs / support | Guide + OpenClaw docs | Nous Research docs | Strong community | GitHub + Discord | Large community |
| Learning curve | Low-medium | Medium | Low (just run) | Medium | Medium-high (visual) |
| Pricing | Free tutorial, $15 guide | Free (open source) | Free (open source) | Free (open source) | Self-hosted free; cloud tier |
| License | OpenClaw: open. Guide: paid. | Open source | MIT | MIT | Sustainable use license |

When to pick which.

Pragmatic guide
§ 01

FreX / OpenClaw

You want a working agent this afternoon. Messaging channels, tools, identity, memory — all wired up. Pay $15 for the guide if you want the playbook; use the free tutorial if you don't.

§ 02

Hermes Agent

You want an agent that learns from every task and builds its own skill library over time. Open source, authored by Nous Research. Pairs well with OpenClaw via migration.

§ 03

Ollama

You just want to run Llama/Qwen/Mistral locally with one command. Pair it with FreX or Hermes for actual agent behavior; Ollama alone is just the engine.
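That "one command" workflow looks roughly like this — a sketch assuming Ollama is installed (see ollama.com); the model tag is an example:

```shell
ollama pull llama3.2    # download a model (tag is an example)
ollama run llama3.2     # interactive chat in the terminal

# Ollama also serves a local API on port 11434 that agent stacks can call:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello"}'
```

That local API is what FreX or Hermes would point at for agent behavior.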

§ 04

LocalAI

You have code that expects an OpenAI-compatible endpoint and want to swap in local inference without rewriting. Drop-in replacement; no agent framework.
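A minimal sketch of that swap using only the Python standard library. The base URL and model name are assumptions (LocalAI listens on port 8080 by default), and `chat` expects a LocalAI server already running:

```python
import json
import urllib.request

# Assumption: a LocalAI server running locally; 8080 is LocalAI's default port.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST to the OpenAI-compatible endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Point `BASE_URL` at a hosted OpenAI-compatible provider instead and the same code keeps working; that interchangeability is the whole pitch.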

§ 05

n8n + LLM

You think in workflows, not agents. You want a visual canvas connecting Slack to Gmail to an LLM to Notion. Excellent if you already use n8n for automation.

Want the turnkey stack?

FreX gives you identity, memory, messaging, and tools out of the box. Free to try; $15 for the full playbook.

Free setup tutorial
Get the guide — $15