Building AI Agents with n8n: A Low-Code Approach to AI Workflows


Introduction

In our previous guide, How to Build AI Agents Without Frameworks, we demonstrated how to build an AI agent from scratch. That tutorial was designed to show the importance of creating agents manually in order to learn every detail of what's happening under the hood. It gave you complete control and a deep understanding of the process. This approach is great and necessary to create good, production-ready and maintainable AI agents.

Now, we're taking a radically different approach: building AI agents using n8n. n8n is a low-code workflow automation tool that lets you visually design automated processes while integrating different APIs and services. This low-code method is all about rapid prototyping, allowing you to quickly assemble automated workflows using a visual drag-and-drop interface. It is the radical opposite of writing custom code: it sacrifices some control for speed and ease of use, which is perfect when you need to iterate fast.

The main benefit of using n8n is its vast library of API nodes, which makes integrating external APIs very easy. That's where this low-code method shines: you can quickly connect to different services and APIs without needing to know the individual API details (which is the most annoying part of building custom agents).

What is n8n?

n8n is an open-source workflow automation tool that allows you to visually design automated processes. With its intuitive drag‑and‑drop interface, n8n allows both developers and non‑developers to connect and integrate various APIs and services.

Key aspects of n8n include:

  • No-Code/Low-Code Approach: Build complex automation workflows without writing extensive code.
  • Extensibility: Easily extend the platform with custom integrations when needed.
  • Visual Debugging: Monitor execution flows in real time and troubleshoot issues easily.
  • Versatility: Connect to a wide range of external services and APIs—whether it's databases, web services, or AI models—to create dynamic and responsive agents.

In addition to these features, n8n offers over 1,200 pre-made workflows to help you get started quickly – check them out here.

One of the most compelling aspects of n8n is its ability to extend UI workflows with custom code when needed (and you'll need custom code eventually!). n8n even allows you to pull in external npm packages.

Out of all the low-code platforms we've tried, n8n has by far the best AI agent nodes. It's easy to get started with and easy to configure to your needs. Many other tools fall short on that latter point in particular.

n8n Workflow

Installing and running n8n

To get started with n8n, you can either sign up for their cloud service or install the self-hosted version on your own infrastructure.

If you just want to get started, use the cloud version; it offers a free trial and very competitive pricing. For the sake of this tutorial, however, we'll run our own instance of n8n.

  1. Create a new directory for your n8n data persistence. Make sure to adjust this path to where you want to store your data.

    mkdir -p /data/n8n-data
  2. Run the following command to start n8n using Docker. Make sure to adjust the path to your data directory.

    docker run -it --rm \
      --name n8n \
      -p 5678:5678 \
      -v /data/n8n-data:/home/node/.n8n \
      docker.n8n.io/n8nio/n8n

By default, n8n uses SQLite as its database. If you want to use a PostgreSQL database for more robust data storage, you can specify the connection details as follows:

docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_DATABASE=<POSTGRES_DATABASE> \
  -e DB_POSTGRESDB_HOST=<POSTGRES_HOST> \
  -e DB_POSTGRESDB_PORT=<POSTGRES_PORT> \
  -e DB_POSTGRESDB_USER=<POSTGRES_USER> \
  -e DB_POSTGRESDB_SCHEMA=<POSTGRES_SCHEMA> \
  -e DB_POSTGRESDB_PASSWORD=<POSTGRES_PASSWORD> \
  -v /data/n8n-data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

This setup will get you going and allows you to create and run flows. While we love and recommend n8n as a rapid-prototyping tool, we discourage using it for high-volume, high-availability production workflows. For those, you should still stick to good old custom code. However, if you want to scale n8n workflows, there are good resources available here.

You can find your instance of n8n running at http://localhost:5678. The first time you access it, you'll be prompted to create an account and set up your workspace.

n8n account creation

Fill out your details and create your account. Once you're done, you'll see your empty n8n workspace.

n8n workspace

Building an AI Agent with n8n

Now that we have a running n8n instance, let's discuss what we're going to build. Similarly to our previous guides on advanced agent creation, we'll create a simple chat agent which responds to user questions and has two different tools: a tool to search Wikipedia and a tool to query a database. This highlights the agent's ability to find the right tool for the job and will serve as a great blueprint for your own integrations.

So, to summarize, we'll create an AI agent that:

  • listens to incoming user requests
  • processes the user input
  • queries an AI model (e.g., OpenAI's API)
  • decides between calling a database query node and a Wikipedia search node
  • returns the result to the user

In terms of n8n, this translates to the following steps:

  1. Add a Trigger node to listen to incoming user requests.

  2. Add the AI Agent node to process the user input and query an AI model.

  3. Connect the AI Agent node to a Chat model node as well as to a Database query node and a Wikipedia search node.
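Before wiring this up in the UI, it can help to see the shape of the loop we're recreating. Here is a minimal, illustrative Python sketch (this is not what n8n runs internally; the chat-model call is stubbed out with a simple keyword check, and all names are hypothetical):

```python
# Illustrative sketch of the agent loop we're assembling in n8n.
# The chat-model call is stubbed with a keyword check; in n8n,
# the AI Agent node performs the real tool-selection step.

def fake_model_pick_tool(question: str) -> str:
    """Stand-in for the chat model deciding which tool fits the question."""
    if "users" in question.lower():
        return "database"
    return "wikipedia"

def search_wikipedia(question: str) -> str:
    """Stand-in for the Wikipedia search node."""
    return f"wikipedia result for: {question}"

def query_database(question: str) -> str:
    """Stand-in for the database query node."""
    return f"database result for: {question}"

TOOLS = {"wikipedia": search_wikipedia, "database": query_database}

def agent(question: str) -> str:
    tool_name = fake_model_pick_tool(question)  # model decides which tool to use
    return TOOLS[tool_name](question)           # invoke the tool, return the result

print(agent("How many users do we have?"))
```

In n8n, every box in this sketch becomes a node, and the routing decision is handled for us by the AI Agent node.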

Let's start building our AI agent!

  1. Click on the + icon at the top left to create a new workflow.

  2. Then click on the + icon at the center to add a new node.

  3. In the search bar, type "chat" and select Chat Trigger. For now leave the default settings. Hit ESC to close the settings and return to your workflow. The chat node is now added.

    Note: The Chat Trigger node can be hooked up to a public chat interface or used with the internal chat interface of n8n. In most cases you'll want to hook it up externally. Read more about how to do that here: Chat Trigger node

  4. Next, add the AI Agent node. Search for "AI Agent" in the search bar and select the AI Agent node. On the next screen, you can once again define agent settings, like the agent's system message or whether to include intermediate steps in the agent's output. Select the latter and leave the other settings as they are:

    AI Agent settings

    Hit ESC again to come back to your workflow.

  5. Now it's time to connect our agent to its AI model. Click on the + icon at the agent's 'Chat model' connection and select the AI model provider you want to use. For this guide we're using OpenAI's GPT-4o model. After selecting the option, you need to enter your API key by clicking 'Select credentials' in the settings menu. If this is your first time, you'll need to create a new credential by clicking the 'Create new credential' button.

    Fill out the credentials form and hit 'Save'.

    AI Agent credentials

    For the model settings, choose 'gpt-4o' as the model and leave the rest on default.

    AI model settings

    Hit ESC to return to your workflow, which should look as follows:

    AI Agent workflow

  6. Now we can already test the agent. While it does not have tools yet, we can check whether the general workflow is able to take the user's message and invoke the AI model. Click the 'Chat' button at the bottom of the screen and type in a message, like 'How are you?'. The agent should reply with something along the lines that it is just software and therefore doesn't have feelings.

    What's also very nice: on the bottom right-hand side you get a detailed output of the messages sent to the AI model, so the fully detailed system and user messages are available for debugging.

    AI Agent chat

  7. OK, let's add the tools. First, click on the + icon at the 'Tools' connector of the AI Agent node. Select the 'Wikipedia' tool. There are no settings to change; hit ESC to come back to your workflow. Ask a question which you assume can be answered by looking it up on Wikipedia, like 'How many people live in New York?'. The agent should now reply with the number of people living in New York. More importantly, on the bottom right side you see detailed agent steps, showing exactly what the agent was doing. In my example we see that the agent first asked the AI model for a response, then used the Wikipedia tool to look up the answer, and finally used the AI model to format the answer.

    AI Agent tool use

  8. To add the second tool, our PostgreSQL database lookup, click on the + sign at the tool connector again (you can add as many tools as you like). Select the 'Postgres' tool. Similarly to the AI model setup, you need to enter/create your database credentials.

    Database credentials

    (Note: If you run your n8n instance in Docker, make sure to select the right host name for your database. In most cases, it's host.docker.internal or 172.17.0.1 if your database runs on the same machine as n8n. You can check your Docker networks with docker network ls and docker network inspect <network-name>.)

    As for the tool settings, we'd advise using a 'manual' tool description and entering a good description of what the agent might find in your database. Something along these lines:

    Execute a PostgreSQL SELECT query and return the results.
    Available tables and their schemas:

    users
    - id SERIAL PRIMARY KEY
    - email VARCHAR(255) NOT NULL
    - age INTEGER
    - location VARCHAR(100)
    - signup_date DATE NOT NULL
    - last_login TIMESTAMP WITH TIME ZONE
    - job_industry VARCHAR(100)

    user_activity
    - id SERIAL PRIMARY KEY
    - user_id INTEGER REFERENCES users(id)
    - activity_date DATE NOT NULL
    - activity_type VARCHAR(50) NOT NULL

    For 'Operation', use 'Execute query', as we want the AI to determine the query.

    Important: For the 'Query' setting, use {{ $fromAI('placeholder_name') }}. This placeholder will be replaced by the AI model with a query based on the user's question. This way, the agent can dynamically query the database based on the user's input.

    Database settings

    Hit ESC to return to your workflow.

  9. Test your agent again by asking a question which can be answered by your database. In my case I have user data in the database, so I asked 'How many users do we have?'. On the right side you can see the detailed agent steps again. This time, it correctly used the PostgreSQL database tool to query for the answer.

    Database query tool use
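For reference, a query the model might substitute into the $fromAI placeholder for the question 'How many users do we have?' could look like the following (hypothetical; the exact query the model generates will vary):

```sql
-- Hypothetical model-generated query for "How many users do we have?"
SELECT COUNT(*) FROM users;
```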

In principle, we're done now. We have a fully functional AI agent which can use different tools to answer user questions. The agent is able to dynamically decide which tool to use based on the user input. This is a great blueprint for your own agents and can easily be extended with more tools and more complex workflows.

Now that you've seen how easy it is to assemble an agent with n8n, know that you can push the envelope even further. Additionally, if you're looking to enhance your agent's capabilities with intelligent text processing, our Document Extraction with GPT-4o tutorial provides valuable insights into extracting structured information with minimal code.

If you want to see in more details how to interact with your database using plain language, check out our Database Query with Natural Language guide.

However, one thing remains: ask the agent the following question: 'My name is Andreas. How are you?', followed by 'What's my name?'. The agent will not be able to answer the second question. This is because the agent has no conversation memory whatsoever, not even for the current chat session.

Adding conversation memory to the n8n agent

To solve this conversation memory issue, click on the 'Memory' connector of the agent node. Select the 'Window buffer memory' option. Note that this is in-memory storage and will be lost when the n8n instance is restarted. The default memory settings are fine for most use-cases.
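Conceptually, a window buffer memory just keeps the last k exchanges of the current session and prepends them to each model call. A minimal Python sketch of the idea (not n8n's actual implementation; the class and parameter names are our own) could look like this:

```python
from collections import deque

class WindowBufferMemory:
    """Keeps only the most recent k user/assistant exchanges in memory."""

    def __init__(self, k: int = 5):
        # 2 * k: each exchange consists of one user and one assistant message
        self.buffer = deque(maxlen=2 * k)

    def add(self, role: str, content: str) -> None:
        self.buffer.append({"role": role, "content": content})

    def messages(self) -> list:
        """Messages to prepend to the next model call."""
        return list(self.buffer)
```

With something like this in place, 'What's my name?' can be answered because the earlier 'My name is Andreas' message is still in the window when the model is called.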

Memory settings

This memory setting is great for many conversational use-cases where you don't need to store details of the conversation for a long time (which is the case for most chatbots). If you need a more complex memory option, you can use the Memory management node.

Interested in how to train your very own Large Language Model?

We prepared a well-researched guide on how to use the latest advancements in open-source technology to fine-tune your own LLM. This has many advantages, like:

  • Cost control
  • Data privacy
  • Excellent performance - adjusted specifically for your intended use

Further Reading

More information on our managed RAG solution?
To Pondhouse AI
More tips and tricks on how to work with AI?
To our Blog