What is n8n?

An open-source workflow automation platform that became one of the dominant AI orchestration tools of 2024-2026. Learn the architecture, the AI primitives, and where it fits next to dedicated agent platforms.

12 min read

What Is n8n

n8n (pronounced “n-eight-n”) is an open-source workflow automation tool that lets you build integrations and AI workflows on a visual canvas. You drag nodes onto the canvas, wire them together, and the platform runs the resulting workflow on a trigger or a schedule. It sits in the same category as Zapier and Make, but with two big differences: it is self-hostable and it ships with deep AI primitives.

Webhook (trigger) → Set (transform) → OpenAI (LLM) → If (branch) → Slack (action)

Each node is a single step. Connect them on a canvas to build a workflow.

The project was started in 2019 by Jan Oberhauser in Berlin. It is licensed under the Sustainable Use License — commonly described as “fair-code” — which lets you self-host and modify the code freely for internal use, while reserving commercial resale to the n8n team. In practice this means most teams treat it like an open-source tool: clone it, run it in their own VPC, and pay for n8n Cloud or n8n Embed only when they want a managed plane.

n8n grew slowly through 2020-2023 as a developer-friendly Zapier alternative. The growth curve bent sharply upward in 2024 once the team shipped first-class LLM and vector-store nodes. By 2025 it had become one of the most-starred AI orchestration projects on GitHub, with the AI nodes accounting for a large share of new workflows shipped on n8n Cloud.

The Visual Canvas

Every n8n workflow is a graph of nodes connected by lines on a canvas. Data flows from left to right: a trigger node starts the run, downstream nodes transform or act on the data, and the workflow ends when the last node finishes (or branches into parallel paths via control nodes).

A node has three things: an icon, a config panel, and an output. You click the node, fill in the config (an API key, a SQL query, a prompt template), and the output appears as a JSON list of items. Items are the unit of work n8n moves between nodes — if a Webhook trigger receives 50 records, the next node runs 50 times by default. This item-based execution model is what lets n8n handle batches without you writing a loop.
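The Code node exposes that item list directly. Here is a minimal sketch (the field names are made up) in the node's "Run Once for All Items" mode, fanning one derived field out of each incoming item:

```js
// Code node, "Run Once for All Items" mode.
// $input.all() returns every incoming item; each item wraps its data in `json`.
const items = $input.all();

return items.map((item) => ({
  json: {
    email: item.json.email,                 // hypothetical input field
    domain: item.json.email.split("@")[1],  // derived field passed downstream
  },
}));
```

Whatever array of items the node returns becomes the input of the next node, which is all the "loop" most batch workflows ever need.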

The library breaks down into five categories. Counts below are approximate as of 2025-2026 and grow every release.

  • Triggers (30+) — Webhook, Cron, Email, Form, Polling, Manual
  • Actions (400+) — Slack, GitHub, Jira, Salesforce, Notion, Google Sheets…
  • Logic & Flow (20+) — If, Switch, Merge, Loop, Wait, Set, Code
  • Data (30+) — Spreadsheet, Item Lists, Aggregate, Date & Time
  • AI / LLM (60+) — OpenAI, Anthropic, Gemini, vector stores, AI Agent

Beyond built-in nodes, the platform has two escape hatches: a Code node that runs JavaScript or Python inline, and a community nodes package system where teams publish their own nodes to npm. Together these mean nothing about n8n is a black box — if a connector you need does not exist, a Code node closes the gap in minutes.

Workflows are JSON, not magic

Every workflow on the canvas is just a JSON document under the hood. You can export it, diff it, commit it to git, and import it on another instance. This makes promotion across environments (dev → staging → prod) straightforward, even though it is not the path n8n optimizes for out of the box.
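Stripped down, an exported workflow looks roughly like the sketch below. Real exports carry more metadata (IDs, typeVersion, credential references), so treat it as the shape, not a complete file:

```json
{
  "name": "Webhook to Slack",
  "nodes": [
    { "name": "Webhook", "type": "n8n-nodes-base.webhook", "parameters": { "path": "ask" }, "position": [0, 0] },
    { "name": "Slack", "type": "n8n-nodes-base.slack", "parameters": {}, "position": [220, 0] }
  ],
  "connections": {
    "Webhook": { "main": [[{ "node": "Slack", "type": "main", "index": 0 }]] }
  }
}
```

Because nodes and connections are plain JSON arrays and objects, diffs in a pull request stay readable, which is what makes git-based promotion workable.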

AI Workflows in n8n

n8n shipped its first OpenAI node years ago, but the 2024 LangChain integration is what turned it into a serious AI orchestration platform. The integration adds a family of nodes that map directly to LangChain abstractions — chat models, document loaders, text splitters, vector stores, retrievers, memory, output parsers, and most importantly the AI Agent node.

The AI Agent node is the unit that makes n8n agentic. You give it a prompt, plug in a chat model, attach tools (any other n8n node can be a tool), and the agent decides which tools to call to satisfy the prompt. This is the same agent loop you would build with LangChain in Python, exposed as a single visual node. For many use cases it removes the need to write any agent code at all.
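Conceptually, the loop the node automates looks like the sketch below. You do not write this when you use the AI Agent node; it is only here to show the shape, and every function name is a hypothetical stand-in rather than an n8n API:

```js
// Conceptual sketch of the loop the AI Agent node automates.
// callModel and the tools map are hypothetical stand-ins, not n8n APIs.
async function runAgent(prompt, tools) {
  const messages = [{ role: "user", content: prompt }];
  for (let step = 0; step < 10; step++) {          // cap iterations to avoid runaways
    const reply = await callModel(messages, Object.keys(tools));
    if (!reply.toolCall) return reply.content;      // no tool requested: final answer
    const result = await tools[reply.toolCall.name](reply.toolCall.args);
    messages.push(
      { role: "assistant", content: "", toolCall: reply.toolCall },
      { role: "tool", content: JSON.stringify(result) },
    );
  }
  throw new Error("Agent did not converge within the step budget");
}
```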

Step 1 — Webhook trigger: receive a question via HTTP
Step 2 — Document Loader: pull the source corpus
Step 3 — Text Splitter: chunk by tokens or characters
Step 4 — Embeddings + Vector Store: embed and store in Pinecone or Qdrant
Step 5 — AI Agent: retrieve top-k and call the LLM with context
Step 6 — Respond to Webhook: return the grounded answer

A typical RAG workflow in n8n — six nodes wired on the canvas, no glue code.

The same building blocks let you assemble retrieval-augmented generation (RAG) without leaving the canvas. The pattern above — Webhook receives a query, Document Loader and Text Splitter prepare the corpus on a schedule, embeddings populate a vector store, an AI Agent retrieves and synthesizes, and the response goes back to the caller — covers most production RAG implementations.
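Once the workflow is active, the caller only sees an HTTP endpoint. A minimal sketch of invoking it, where the host, webhook path, and payload shape are all assumptions about how the Webhook node was configured:

```js
// Hypothetical endpoint: the URL depends on your instance host and the Webhook node's path.
const res = await fetch("https://n8n.example.com/webhook/ask", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ question: "How do I rotate the API key?" }),
});
console.log(await res.json()); // the grounded answer returned by Respond to Webhook
```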

n8n supports the providers you would expect: OpenAI, Anthropic Claude, Google Gemini, Mistral, Cohere, Groq, plus self-hosted models via Ollama or any OpenAI-compatible endpoint. Vector store nodes cover Pinecone, Qdrant, Supabase, Postgres (pgvector), Weaviate, and an in-memory store for prototyping. For deeper context on the retrieval side, see What is RAG?.

What Teams Build With n8n

n8n adoption inside enterprises usually starts with one or two of these patterns and expands from there. The patterns map cleanly to its strengths: external integration, visual logic, and (since 2024) AI orchestration.

  • 1. Internal automation glue

    The original n8n use case. Sync deals between Salesforce and HubSpot, file Jira tickets from Slack reactions, post GitHub releases to a Notion changelog. Workflows live in IT or RevOps and replace a pile of brittle Zapier zaps with self-hostable equivalents.

  • 2. AI-augmented data pipelines

    Read rows from a database, classify each with an LLM (sentiment, intent, PII tagging), write the result back. n8n's item-based execution is purpose-built for this shape — 10,000 rows in, 10,000 enriched rows out; a sketch of the per-item prompt-building step follows this list.

  • 3. RAG-backed support and Q&A

    A Slack message or a webhook triggers an AI Agent that searches an internal vector index and answers with citations. Most teams start here as their first AI workflow because the value (deflected support tickets) is measurable from week one.

  • 4. Triage and routing agents

    Inbound email or a fresh Jira issue lands in a workflow. An LLM categorizes it, decides who should own it, drafts a first response, and assigns the ticket. Humans approve or override. This is where most enterprises run into governance limits and start asking what the audit trail looks like.

  • 5. Scheduled batch jobs with conditional branches

    Cron-triggered workflows that pull a report, call a model, branch based on the answer, and notify a channel or open a ticket only on certain conditions. These are the long-running “digital coworker” workflows that quietly do work overnight.
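The per-item prompt-building step referenced in pattern 2 is usually a small Code node placed in front of the LLM node. A sketch with hypothetical field names:

```js
// Code node before the LLM step: attach one classification prompt per incoming row.
// `body` and `prompt` are hypothetical field names.
return $input.all().map((item) => ({
  json: {
    ...item.json,
    prompt: `Classify this ticket as positive, neutral, or negative:\n\n${item.json.body}`,
  },
}));
```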

Self-Hosting and Deployment

n8n can run as a single Node.js process on a laptop, but the production shape uses queue mode — a setup where the editor, the workers, and the queue are independent processes that scale separately. This is the architecture that handles thousands of executions per day without the editor becoming sluggish.

  • Editor UI — browser-based canvas where workflows are built and tested
  • Main process — webhook listener, scheduler, and workflow runner for non-queued executions
  • Queue (Redis) — Bull-based job queue used in queue mode for production scale
  • Workers — horizontally scaled processes that pick up jobs and run individual workflow executions
  • Database — stores workflows, credentials, executions, and history; SQLite for dev, Postgres or MySQL for production

Production deployments separate the editor, the queue, and the workers so each can scale independently.

The official Docker image is the most common starting point. For Kubernetes, the community Helm chart deploys main + workers + Redis + Postgres in one shot, and n8n's own docs cover the recommended values. n8n Cloud is a managed multi-tenant service that runs the same architecture for teams that do not want to operate the stack themselves.
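A minimal queue-mode sketch in Docker Compose, with placeholder service names, passwords, and versions; the official production checklist covers more settings than shown here:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me
  redis:
    image: redis:7
  n8n:                        # editor UI + main process
    image: n8nio/n8n
    ports: ["5678:5678"]
    environment: &n8n-env
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me
      N8N_ENCRYPTION_KEY: replace-with-a-long-random-string
  n8n-worker:                 # scale this service horizontally
    image: n8nio/n8n
    command: worker
    environment: *n8n-env
```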

From a security standpoint, three things matter on day one: network egress controls (n8n calls external SaaS APIs — you should know which ones), credential storage (use the encryption key environment variable and rotate it), and the editor authentication model (basic auth or SAML/SSO via reverse proxy). The audit log is workflow-level, not call-level — this is the limitation regulated teams hit first.

The credential blast radius

Every credential stored in n8n — Salesforce token, OpenAI key, GitHub PAT — is reachable by every workflow on the same instance. There is no per-workflow credential isolation by default. For multi-team environments, consider per-team instances or a credential-scoping layer in front of the platform.

Where n8n Fits in Your Stack

n8n competes in two adjacent markets: workflow automation (Zapier, Make, Workato, Tray) and AI agent orchestration (LangGraph, CrewAI, Axiom AI Studio, Langflow). It is one of the few platforms that lives in both, which is also the source of its largest tradeoff: it is broad rather than deep on either axis.

Versus Zapier and Make, n8n wins on self-hosting, the Code node, item-based batch execution, and price at scale. Zapier wins on connector breadth (5,000+ connectors versus 400+) and on a more polished cloud experience for non-technical users. Most engineering-led teams pick n8n; most marketing-and-ops-led teams pick Zapier or Make.

Versus Axiom AI Studio, the comparison is different in kind. n8n is workflow-automation-first with AI added on; AI Studio is agent-platform-first with workflow automation as one capability among many. The decision usually comes down to whether you are building workflows that happen to call an LLM or agents that happen to have a workflow. The two stacks frequently coexist, and we recommend that pattern explicitly: n8n for the integration plumbing, AI Studio for the governed agent surface.

Dimension | n8n | Axiom AI Studio
Origin | Workflow automation, since 2019 | AI-native agent platform
Best at | Connecting 400+ SaaS apps | Building governed AI agents
Versioning | DB-stored JSON, optional git sync | Git-native, PR review, rollback
Observability | Per-workflow execution logs | Per-LLM-call traces, DORA metrics
Audit trail | Workflow-level | Model-call-level, compliance-grade
Policy enforcement | Role-based on workflows | Prompt + output policies on every call
Deployment | Self-host, Cloud, embedded | Kubernetes-native with built-in CI/CD
License | Sustainable Use (fair-code) | Commercial

Where regulated and enterprise teams hit n8n's limits is the same place every general-purpose automation tool hits them: token-cost observability is shallow, the audit trail is workflow-shaped instead of model-call-shaped, and prompt-and-output policy enforcement is not built in. None of this means n8n is the wrong tool — it means the AI compliance layer needs to live elsewhere. See What is AI Observability? for the dimensions to evaluate.

Getting Started

The fastest path is the official Docker image. docker run -p 5678:5678 n8nio/n8n brings up a single-process instance bound to http://localhost:5678. The first time you load the editor, n8n creates an owner account in the local SQLite database and you are in.
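To keep workflows and credentials across container restarts, mount a volume over the data directory. A sketch of the commonly documented invocation (the volume name is arbitrary):

```
docker run -it --rm --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n
```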

Step 1 — Build your first workflow on the canvas

Add a Manual Trigger, drop an HTTP Request node next to it, point it at a public JSON endpoint, and execute. The output panel shows the response as items. Add a Set node to extract a field, then a Slack node to post the result. You have just shipped the canonical “hello world” n8n workflow without a single line of code.

Step 2 — Add an AI Agent

Replace the manual trigger with a Webhook trigger. Add an AI Agent node, attach an OpenAI or Anthropic chat model node to it, and give it a prompt. Connect the agent's output to your Slack node. Now your workflow takes a question via HTTP, asks an LLM, and posts the answer to Slack. This is the template for almost every “ask our knowledge base” bot.

Step 3 — Move to production carefully

Switch from SQLite to Postgres, set the encryption key environment variable, enable queue mode with Redis, and put the editor behind your SSO. The official docs at docs.n8n.io have a production checklist — follow it before exposing the editor to anyone outside the platform team.

Step 4 — Decide where governance lives

Before you have a hundred AI workflows, decide where prompts, outputs, and tool calls get logged for audit. n8n's execution log captures workflow-level outcomes; if you need model-call-level evidence, route LLM traffic through a gateway in front of n8n. This is the simplest path to compliance-grade observability without losing n8n's velocity.
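Outside the canvas, routing through a gateway is typically a one-line base-URL change in whatever OpenAI-compatible SDK the rest of your stack uses; inside n8n it is usually the base-URL field on the model credential. A hedged sketch with the OpenAI Node.js client and a hypothetical gateway URL:

```js
import OpenAI from "openai";

// Point the client at the gateway instead of api.openai.com (URL is hypothetical).
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://llm-gateway.internal.example/v1",
});

const reply = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize today's open tickets." }],
});
console.log(reply.choices[0].message.content);
```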

How Axiom Complements n8n

n8n is excellent at the integration and rapid-prototype layer of an AI program. It is not built to be the system of record for LLM calls, audit evidence, or agent governance — nor should it be. The platforms that solve those problems live one layer down, in front of the model and tool calls.

The two-layer pattern

  • n8n for integration logic and fast iteration on workflow ideas.
  • LLM Gateway in front of n8n for the audit trail, policy enforcement, and cost rollups n8n itself does not produce.
  • AI Studio for the agents that graduate from prototype to a production surface with full SDLC discipline.
  • Webhook bridge — n8n calls AI Studio agents over HTTP and AI Studio calls n8n workflows the same way. Neither replaces the other.

How Axiom differs

Axiom's LLM Gateway sits between n8n and your model providers, so every prompt and every response — from every workflow — lands in one normalized audit trail with cost attribution, policy enforcement, and OpenTelemetry traces. Axiom's AI Studio is where governed agents are built when n8n's workflow shape stops fitting the problem — agents that need code-grade versioning, PR review, Kubernetes-native deploy, and call-level observability.

Keep n8n. Add the governance layer.

Most teams do not need to migrate off n8n — they need to bring their AI activity in n8n into a single audit-grade trail. The LLM Gateway is one environment-variable change away from giving you that, with no workflow rewrites. When the workload outgrows n8n, AI Studio is where the governed-agent half lives.

See LLM Gateway

Add the governance layer to your n8n workflows

Axiom's LLM Gateway captures every prompt, response, and tool call from every n8n workflow — with cost attribution, policy enforcement, and OpenTelemetry traces.

Contact Us