With the introduction of native guardrail nodes in n8n, you can now easily enforce robust data safety measures in your AI workflows. These nodes help you prevent sensitive data leaks, flag inappropriate outputs, and sanitize user input before sending it to AI models or external destinations. In this tutorial, you’ll learn how to use n8n guardrail nodes step by step, with practical examples and configuration tips to secure your automations.

Prerequisites

  • Access to n8n (version 1.119 or later)
  • Basic understanding of n8n workflows and nodes
  • Sample workflow with text input/output (e.g., email, Slack, or API data)

Step 1: Understanding Guardrail Nodes

Guardrail nodes are designed to enforce rules on text flowing in and out of your workflows. These help you block, flag, or sanitize data automatically, adding an essential layer of safety to AI-powered automation.

Types of Guardrails

  • Keywords: Block specific words or phrases you define.
  • Jailbreak: Detects prompt injection or exploit attempts.
  • NSFW: Flags not-safe-for-work content in messages.
  • Personal Data (PII): Detects emails, credit cards, SSNs, and other sensitive data.
  • Secret Keys: Flags API keys, passwords, or other credentials.
  • Topical Alignment: Ensures content stays within a defined topic or scope.
  • URLs: Allows or blocks specific URLs or schemes (e.g., only HTTPS links).
  • Custom: Create your own guardrail using prompts or regex rules.

Step 2: Updating n8n to Access Guardrails

  1. Ensure your n8n app is updated to version 1.119 or later.
  2. In the workflow editor, search for guard to access new guardrail nodes.

Step 3: Using ‘Check Text for Violations’

This action analyzes text and flags or blocks content that violates configured guardrails, using AI for intelligent detection.

  1. Add the Check Text for Violations node to your workflow.
  2. Set up guardrails by choosing the relevant types from the list above (you can use multiple at once).
  3. Configure parameters such as keywords, thresholds, business scope, or allowed URLs depending on the guardrail type.
  4. Feed in any text input (e.g., email, chat message) as a variable.
  5. Handle pass/fail outputs:
    • On pass, let the workflow proceed (e.g., send a message or log data).
    • On fail, trigger actions like notifications, errors, or custom logic.
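The pass/fail branching can be sketched in plain Python; `route` and the `check_result` shape are illustrative stand-ins, not the node's actual output format:

```python
# Hypothetical sketch of the pass/fail branching after a
# "Check Text for Violations" node; the result dict shape is assumed.
def route(check_result: dict) -> str:
    """Route a workflow item based on a guardrail check result."""
    if check_result.get("passed"):
        return "continue"          # pass output: let the workflow proceed
    return "handle_violation"      # fail output: notify, error, or custom logic

route({"passed": False})  # "handle_violation"
```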

Example: Blocking Sensitive Keywords

// Block the keywords 'password' and 'system' in text
{"keywords": "password, system"}

If the text contains these keywords, the node will flag or block them.
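The keyword check can be sketched with a regex search; `violates_keywords` is an illustrative stand-in, and the real node's matching rules (case handling, word boundaries) may differ:

```python
import re

def violates_keywords(text: str, keywords: list[str]) -> bool:
    """Return True if any configured keyword appears in the text.

    A minimal sketch of the Keywords guardrail, assuming
    case-insensitive whole-word matching.
    """
    pattern = r"\b(" + "|".join(map(re.escape, keywords)) + r")\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None

violates_keywords("Please reset my password", ["password", "system"])  # True
```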

Example: Jailbreak Detection

// Detect prompt injection attempts
{"guardrail_type": "jailbreak", "threshold": 0.9}

The threshold sets how confident the detection must be before content is flagged: scores range from 0 (safe) to 1 (risky), so a higher threshold produces fewer, higher-confidence flags.
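The threshold semantics can be sketched in two lines; `flag_if_risky` is an assumed helper, not the node's API:

```python
def flag_if_risky(score: float, threshold: float = 0.9) -> bool:
    """Flag content whose risk score meets or exceeds the threshold.

    Sketch of threshold semantics: scores near 0 are safe,
    scores near 1 are risky.
    """
    return score >= threshold

flag_if_risky(0.95)  # True: confident jailbreak detection
flag_if_risky(0.40)  # False: below the 0.9 threshold
```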

Example: NSFW Content Filtering

// Block content labeled as NSFW
{"guardrail_type": "nsfw", "threshold": 0.8}

Example: Blocking URLs Except Allowed Domains

// Only allow https URLs from upai.com
{"allowed_domains": ["upai.com"], "allowed_schemes": ["https"]}
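An allowlist check like this one can be sketched with the standard library's URL parser; `url_allowed` is an illustrative helper, not the node's implementation:

```python
from urllib.parse import urlparse

def url_allowed(url: str,
                allowed_domains: list[str],
                allowed_schemes: list[str]) -> bool:
    """Sketch of the URLs guardrail: permit only configured
    schemes and domains, rejecting everything else."""
    parsed = urlparse(url)
    return (parsed.scheme in allowed_schemes
            and parsed.hostname in allowed_domains)

url_allowed("https://upai.com/docs", ["upai.com"], ["https"])  # True
url_allowed("http://upai.com", ["upai.com"], ["https"])        # False
```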

You can stack multiple guardrails in a single node for complex scenarios.

Step 4: Using ‘Sanitize Text’

Unlike the previous action, Sanitize Text does not send content to AI. It removes or sanitizes sensitive elements (like PII, keys, or URLs) using pattern matching before your data ever reaches an AI model.

  1. Add the Sanitize Text node to your workflow.
  2. Choose what to sanitize:
    • Personal Data (PII): Emails, addresses, phone numbers, SSNs, etc.
    • Secret Keys: API keys, credentials.
    • URLs: Any URL in the text.
    • Custom Regex: Any pattern you define (e.g., phone numbers, custom codes).
  3. Drag your text input into the node and run the workflow.

Example: Sanitize PII

// Input: “My phone number is 123-456-7890.”
// Output: “My phone number is [PII-REMOVED].”

Example: Sanitize Secret Keys

// Input: “My API key is sk-xxxx-yyyy-zzz.”
// Output: “My API key is [KEY-REMOVED].”

Example: Block All URLs

// Input: “Visit https://upai.com for more information.”
// Output: “Visit [URL-REMOVED] for more information.”
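The three substitutions above can be sketched with regex replacement; these patterns are illustrative only, and the node's built-in detection is more thorough than simple regexes:

```python
import re

# Illustrative patterns only; placeholder tokens follow the
# example outputs above.
PATTERNS = {
    "[PII-REMOVED]": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),   # US-style phone
    "[KEY-REMOVED]": re.compile(r"\bsk-[A-Za-z0-9-]+\b"),    # API key prefix
    "[URL-REMOVED]": re.compile(r"https?://\S+"),            # any http(s) URL
}

def sanitize(text: str) -> str:
    """Replace each matched sensitive span with its placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

sanitize("My phone number is 123-456-7890.")
# "My phone number is [PII-REMOVED]."
```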

Example: Custom Regex Sanitize

// Use regex to sanitize custom patterns
{"custom_regex": "[A-Z]{3}-[0-9]{4}"}
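A custom-regex substitution like this can be sketched directly with `re.sub`; the function name and placeholder token are assumptions for illustration:

```python
import re

def sanitize_custom(text: str, custom_regex: str,
                    placeholder: str = "[REMOVED]") -> str:
    """Replace every match of a user-supplied pattern, sketching
    how a custom-regex sanitize rule behaves."""
    return re.sub(custom_regex, placeholder, text)

sanitize_custom("Order code ABC-1234 confirmed.", r"[A-Z]{3}-[0-9]{4}")
# "Order code [REMOVED] confirmed."
```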

Tips & Best Practices

  • Stack multiple guardrails for comprehensive protection.
  • Customize prompts and thresholds for your organizational needs.
  • Sanitize before AI processing whenever possible to minimize sensitive data exposure.
  • Test workflows thoroughly with both expected and unexpected text inputs.

Conclusion

n8n's native guardrail nodes give you powerful, flexible options for securing data in your AI and automation workflows. Start by updating n8n, try out the check and sanitize actions, and stack or customize guardrails as needed. For more in-depth workflows and templates, consult n8n's official resources or community.

Key takeaway: Configure guardrails early in your workflow to ensure that data privacy and compliance are always central to your AI automations.