
Workflow Design Principles

Start Simple, Then Expand

Begin with a minimal viable workflow that solves the core problem. Test it thoroughly, then add enhancements incrementally. Example progression:
  1. Basic workflow: Trigger → Agent → Action
  2. Add error handling: Include notifications on failure
  3. Add optimization: Implement caching or conditional execution
  4. Add monitoring: Track metrics and performance

One Workflow, One Purpose

Keep each workflow focused on a single, well-defined task. If a workflow becomes too complex, split it into multiple workflows. Good examples:
  • ✅ “Process New Support Tickets”
  • ✅ “Daily Sales Report Generator”
  • ✅ “Customer Onboarding Automation”
Avoid:
  • ❌ “Handle All Customer Interactions and Generate Reports”
  • ❌ “Universal Business Process Automation”

Design for Failure

Assume that external services will occasionally fail. Build resilience into your workflows:
  • Add error handling to critical nodes
  • Include fallback paths for important decisions
  • Set appropriate timeouts
  • Use notifications to alert on failures
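For example, a Code Node that calls an unreliable API can retry a few times before giving up; a minimal sketch, where fetch_data and the retry counts are placeholders for your own call and tolerances:
import time

def call_with_retries(fetch, attempts=3, delay_seconds=2):
    # Retry a flaky operation a few times before failing the node
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == attempts:
                raise  # Out of retries; let node-level error handling take over
            time.sleep(delay_seconds * attempt)  # Simple linear backoff

# fetch_data is a hypothetical helper for your external call
result = call_with_retries(lambda: fetch_data(trigger.url))
return {"result": result}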

Naming Conventions

Workflow Names

Use clear, action-oriented names that describe what the workflow does:
  • ✅ “Process Customer Feedback Forms”
  • ✅ “Generate Weekly Analytics Report”
  • ❌ “Workflow 1”
  • ❌ “My Test”

Node Names

Give each node a descriptive name that explains its specific purpose. Good examples:
  • “Extract Customer Data from Email”
  • “Check if Order Amount Exceeds $1000”
  • “Notify Sales Team of High-Value Lead”
Avoid generic names:
  • ❌ “Agent 1”
  • ❌ “HTTP Request”
  • ❌ “Condition”

Variable Names

When creating variables (like loop variables or output aliases), use:
  • snake_case for variables: customer_email, total_amount
  • camelCase for object properties: user.firstName, order.totalPrice
  • Descriptive names that indicate content: approved_requests, not list1
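In a Code Node the payoff is immediate readability; a small illustration (the request fields are hypothetical):
# Descriptive names make the intent obvious at a glance
approved_requests = [r for r in trigger.get("requests", []) if r.get("status") == "approved"]
total_amount = sum(r.get("amount", 0) for r in approved_requests)
return {"approved_requests": approved_requests, "total_amount": total_amount}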

Testing Strategies

Test Each Node Independently

Before running the full workflow, test each node individually:
  1. Click the play button on each node
  2. Verify the output is correct
  3. Check for errors or warnings
  4. Validate the data structure

Test with Real-World Data

Use actual examples from your production environment:
  • Real form submissions (anonymized if needed)
  • Actual API responses
  • Representative data volumes
  • Edge cases and unusual inputs
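If production records need anonymizing first, a small Code Node can mask sensitive fields while keeping the data shape realistic; a sketch (the field names are placeholders):
def anonymize(record):
    # Replace identifying values but preserve structure and types
    masked = dict(record)
    if "email" in masked:
        masked["email"] = "user@example.com"
    if "name" in masked:
        masked["name"] = "Test User"
    return masked

test_records = [anonymize(r) for r in trigger.get("records", [])]
return {"records": test_records}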

Create Test Scenarios

Document and test different paths through your workflow:
  • Happy path: Everything works as expected
  • Error conditions: What happens when APIs fail?
  • Edge cases: Empty data, very large inputs, special characters
  • Boundary conditions: Maximum/minimum values
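One way to keep these scenarios executable is a table of inputs and expected outcomes checked in a Code Node; a sketch, where validate is a hypothetical helper like the one under “Add Validation Early” below:
scenarios = [
    {"name": "happy path", "input": {"email": "a@b.com", "amount": 10}, "should_pass": True},
    {"name": "missing email", "input": {"email": "", "amount": 10}, "should_pass": False},
    {"name": "zero amount", "input": {"email": "a@b.com", "amount": 0}, "should_pass": False},
]

for s in scenarios:
    try:
        validate(s["input"])  # validate is a placeholder for your checks
        passed = True
    except Exception:
        passed = False
    assert passed == s["should_pass"], f"Scenario failed: {s['name']}"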

Use Version Control

Before making significant changes to a production workflow:
  1. Test changes in the draft version thoroughly
  2. Document what changed in the version description
  3. Publish as a new version
  4. Monitor the first few runs carefully
  5. Keep the previous version available for rollback

Cost Optimization

Choose the Right Model

Use the smallest model that achieves your goals:
  • Simple tasks (categorization, extraction): Use faster, cheaper models
  • Complex reasoning: Use more capable models
  • Structured output: Consider models optimized for JSON

Minimize Agent Calls

AI agents are the most expensive nodes. Optimize their use:
  • Batch processing: Process multiple items in one agent call when possible
  • Caching: Don’t re-analyze the same content
  • Conditional execution: Only call agents when necessary
  • Prompt efficiency: Write clear, concise prompts
Example - Inefficient:
Loop over 100 customer records
  → Agent: Analyze each customer (100 agent calls)
Better:
Code: Batch customers into groups of 10
Loop over 10 groups
  → Agent: Analyze batch of 10 customers (10 agent calls)
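The batching step itself is only a few lines in a Code Node; a minimal sketch, assuming the records arrive as a list on the trigger:
# Split 100 records into batches of 10: 10 agent calls instead of 100
records = trigger.get("records", [])
batch_size = 10
batches = [records[i:i + batch_size] for i in range(0, len(records), batch_size)]
return {"batches": batches}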

Set Spending Limits

Configure cost controls in workflow settings:
  • Monthly limit: Cap total spending per month
  • Per-execution limit: Prevent runaway costs from a single run
  • Alert thresholds: Get notified at 50%, 75%, 90% of limit

Monitor Usage

Regularly review the Usage tab to identify optimization opportunities:
  • Which nodes consume the most credits?
  • Are there redundant AI calls?
  • Can you cache results or use cheaper alternatives?

Error Handling

Configure Node-Level Error Handling

For each critical node, decide how to handle failures:
  • Fail workflow: Use when the error makes continuation impossible. Example: Payment processing failed → stop workflow
  • Continue workflow: Use when errors are acceptable or non-critical. Example: Notification failed → log error but continue saving data
  • Error callback: Route to alternative nodes on error. Example: API unavailable → use backup API or queue for retry

Add Validation Early

Validate inputs at the start of your workflow:
// In a Code Node right after the trigger
if (!trigger.email || !trigger.email.includes("@")) {
  throw new Error("Invalid email address provided");
}

if (!trigger.amount || trigger.amount <= 0) {
  throw new Error("Amount must be greater than zero");
}

return trigger;

Use Try-Catch in Code Nodes

Wrap risky operations in try/except (Python) or try-catch (JavaScript) blocks:
try:
    # Attempt the operation
    result = complex_calculation(trigger.data)
    return {"success": True, "result": result}
except ValueError as e:
    # Handle specific errors gracefully
    return {"success": False, "error": str(e)}
except Exception as e:
    # Log unexpected errors
    print(f"Unexpected error: {str(e)}")
    return {"success": False, "error": "An unexpected error occurred"}

Notify on Critical Failures

Add a notification node on error paths for important workflows:
Workflow Failed: {{workflow.name}}
Node: {{failed_node.name}}
Error: {{error.message}}
Run ID: {{run.id}}
Timestamp: {{run.timestamp}}

Data Handling

Validate Data Structure

Don’t assume data will always be in the expected format:
# Check if data exists and has expected structure
items = trigger.get("items", [])
if not isinstance(items, list):
    return {"error": "Expected items to be a list"}

# Safely access nested properties
for item in items:
    price = item.get("price", 0)
    quantity = item.get("quantity", 1)
    # Process with defaults

Use Structured Outputs

For agent nodes, always define structured output schemas when you need reliable data. Why?
  • Guarantees data format
  • Prevents parsing errors
  • Makes downstream nodes more reliable
  • Easier to debug
Example: Instead of parsing text like “The sentiment is positive and priority is high”, use:
{
  "sentiment": "positive",
  "priority": "high"
}
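If your platform accepts JSON Schema for structured outputs, the matching schema might look like this (the exact schema format depends on your platform):
{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] },
    "priority": { "type": "string", "enum": ["low", "medium", "high"] }
  },
  "required": ["sentiment", "priority"]
}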

Handle Missing Data

Provide sensible defaults for optional fields:
const customerName = trigger.name || "Unknown Customer";
const priority = analysis.priority || "medium";
const tags = trigger.tags || [];

Sanitize User Input

Clean and validate data from forms and external sources:
import re

def sanitize_email(email):
    # Remove whitespace and convert to lowercase
    email = email.strip().lower()
    # Basic email validation
    if not re.match(r'^[\w\.-]+@[\w\.-]+\.\w+$', email):
        raise ValueError("Invalid email format")
    return email

clean_email = sanitize_email(trigger.email)

Performance Optimization

Parallelize When Possible

When nodes don’t depend on each other, they can run in parallel.
Sequential (slower):
Trigger → API Call 1 → API Call 2 → API Call 3 → Continue
Parallel (faster):
              → API Call 1 →
Trigger →    → API Call 2 →    → Continue
              → API Call 3 →
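If the calls have to happen inside a single Code Node instead, a thread pool gives a similar speedup for I/O-bound work; a sketch, where fetch_one is a hypothetical helper for your API call:
from concurrent.futures import ThreadPoolExecutor

urls = trigger.get("urls", [])

# Run independent I/O-bound calls concurrently rather than one at a time
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch_one, urls))  # fetch_one is a placeholder

return {"results": results}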

Use Code for Simple Transformations

Don’t use an AI agent for tasks that can be done with simple code.
Inefficient:
Agent: "Convert this date to ISO format"
Better:
from datetime import datetime
date_string = trigger.date
iso_date = datetime.strptime(date_string, "%m/%d/%Y").isoformat()
return {"date": iso_date}

Limit Loop Iterations

Always set maximum iteration limits on loops:
Max Iterations: 100
This prevents:
  • Accidental infinite loops
  • Runaway costs
  • Performance degradation
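Inside a Code Node, the same guard is a single slice; a minimal sketch (process is a placeholder for your per-item logic):
items = trigger.get("items", [])
MAX_ITERATIONS = 100

# Hard cap: never process more than MAX_ITERATIONS items in one run
for item in items[:MAX_ITERATIONS]:
    process(item)  # process is a hypothetical per-item handler

if len(items) > MAX_ITERATIONS:
    print(f"Truncated: skipped {len(items) - MAX_ITERATIONS} items")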

Cache Expensive Operations

If you’re processing the same data multiple times, cache results:
# In a code node: a simple in-memory cache, scoped to this execution
cache = {}

def get_user_data(user_id):
    if user_id in cache:
        return cache[user_id]

    # Expensive API call (fetch_from_api stands in for your own lookup)
    data = fetch_from_api(user_id)
    cache[user_id] = data
    return data

Security Best Practices

Protect Sensitive Data

  • Never expose API keys or credentials in node configurations that might be shared
  • Use connection management for authentication
  • Avoid logging sensitive information (passwords, credit cards, etc.)
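A simple habit is to redact sensitive fields before anything is logged or passed downstream; a minimal sketch (the field list is a placeholder for your own):
SENSITIVE_FIELDS = {"password", "credit_card", "ssn"}

def redact(record):
    # Replace sensitive values so logs and downstream nodes never see them
    return {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v) for k, v in record.items()}

print(redact(trigger.get("form_data", {})))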

Validate External Input

Always validate data from webhooks and forms:
# Whitelist allowed values
allowed_priorities = ["low", "medium", "high"]
if trigger.priority not in allowed_priorities:
    return {"error": "Invalid priority value"}

# Validate data types
if not isinstance(trigger.amount, (int, float)):
    return {"error": "Amount must be a number"}

# Check ranges
if trigger.amount < 0 or trigger.amount > 1000000:
    return {"error": "Amount out of valid range"}

Limit Public Form Access

For public forms:
  • Only collect necessary information
  • Add rate limiting if available
  • Use CAPTCHA for high-value forms
  • Monitor for abuse

Use Least Privilege

When sharing workflows:
  • Give team members the minimum access they need
  • Use viewer access for people who only need to monitor
  • Reserve editor access for workflow builders

Maintenance and Monitoring

Document Complex Logic

Add comments to explain non-obvious decisions:
from datetime import datetime, timedelta

# We check orders from the last 30 days because older orders
# are handled by the legacy system and shouldn't be processed here
thirty_days_ago = datetime.now() - timedelta(days=30)
recent_orders = [o for o in orders if o.date > thirty_days_ago]

Monitor Key Metrics

Track important metrics for your workflows:
  • Success rate: % of runs that complete without errors
  • Average execution time: How long runs typically take
  • Cost per run: Credits consumed per execution
  • Error types: Common failure patterns
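If you can export run history, these metrics take a few lines to compute; a sketch, assuming each run record carries status, duration_seconds, and credits fields (hypothetical names):
runs = trigger.get("runs", [])  # exported run records

if runs:
    total = len(runs)
    successes = sum(1 for r in runs if r.get("status") == "success")
    print(f"Success rate: {successes / total:.1%}")
    print(f"Avg execution time: {sum(r.get('duration_seconds', 0) for r in runs) / total:.1f}s")
    print(f"Avg cost per run: {sum(r.get('credits', 0) for r in runs) / total:.1f} credits")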

Set Up Alerts

Configure notifications for:
  • Workflow failures (especially critical workflows)
  • Cost thresholds being exceeded
  • Unusual execution times
  • High error rates

Regular Reviews

Schedule periodic reviews of your workflows:
  • Monthly: Review costs and usage patterns
  • Quarterly: Update instructions and logic for accuracy
  • After changes: Test thoroughly before and after updates
  • Annually: Consider if the workflow is still needed

Common Patterns

Approval Workflows

Trigger → Agent (analyze) → Condition (needs approval?)
  → Yes: Send Notification → Wait for response
  → No: Execute Action

Data Enrichment Pipeline

Trigger → HTTP (fetch data) → Agent (enrich/analyze) →
Code (transform) → Action (save)

Batch Processing

Trigger → HTTP (get list) → Loop (each item) →
  Agent (process) → Action (save) → Loop End → Notification (summary)

Error Recovery

Action → [Success] → Continue
       → [Error] → Code (log) → Notification → Alternative Action

Checklist for Production Workflows

Before activating a workflow in production, verify:
  • All nodes have descriptive names
  • Test runs completed successfully
  • Error handling configured for critical nodes
  • Cost limits set appropriately
  • Variables use clear naming conventions
  • Sensitive data is protected
  • Form validation in place (if using forms)
  • Team members notified of failures
  • Documentation/comments added for complex logic
  • Success/failure scenarios tested
  • Monitoring alerts configured

Next Steps