Chat Node Guide
The Chat Node provides an interactive conversational AI interface that bridges real-time chat experiences with automated workflow execution. Unlike Agent nodes that process single prompts, Chat nodes maintain persistent conversation history and can work both standalone and integrated into complex workflows.
Key Features
💬 Interactive Chat Interface
- Real-time message exchange with AI models
- Persistent chat history stored locally
- Message history survives page reloads and sessions
- Auto-scroll to latest messages with timestamps
- Support for all model types (OpenAI, Anthropic, local models)
🔗 Workflow Integration
- Works standalone (no connection warnings)
- Accepts string input from previous workflow nodes
- Configurable output modes for downstream processing
- Run button appears when connected to workflows
- Input validation with comprehensive error feedback
🧠 Memory Management
- Persistent conversation storage per node instance
- Configurable message limiting for workflow context
- Chat history available across workflow executions
- Memory-enabled agent behavior with conversation continuity
Configuration Options
Model Settings
- Model Selection: Choose from OpenAI GPT-4o, Claude, or local models
- Temperature: Control response creativity (0.0 - 1.0)
- System Prompt: Set AI behavior, personality, and context
- Provider/Endpoint: Configure local model connections
Workflow Settings
- Output Mode:
  - Response (default): Pass only the latest AI response as a string
  - Full History: Pass the entire conversation as a structured array
- Message Limiting: Enable to control context size and performance
- Max Messages: Number of recent message pairs to include (default: 10)
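The message-limiting behavior described above can be sketched roughly as follows. This is an illustrative implementation, not the node's actual code; the function name `limitMessages` and the decision to always preserve system messages are assumptions for the example.

```typescript
// Illustrative sketch of message limiting: keep only the most recent
// N user/assistant pairs when building the context sent to the model.
interface Msg {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

function limitMessages(history: Msg[], maxPairs: number): Msg[] {
  // Assumption: system messages carry behavior instructions, so they
  // are kept regardless of the limit.
  const system = history.filter((m) => m.role === 'system');
  const turns = history.filter((m) => m.role !== 'system');
  if (maxPairs <= 0) return system;
  // Each pair is one user message plus one assistant reply.
  return [...system, ...turns.slice(-maxPairs * 2)];
}
```

With the default of 10, the model sees at most the 20 most recent user/assistant messages plus any system prompt, which keeps token usage bounded in long-running workflows.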
Display Settings
- Detail Mode: Expanded chat interface with full conversation view
- Compact Mode: Minimal node display for clean workflow layouts
- Resizable: Adjustable width and height in detail mode
Usage Scenarios
1. Standalone Chat Interface
Perfect for:
- Customer support interfaces
- Interactive help systems and tutorials
- Conversation testing and AI development
- AI-powered chat applications
[Chat Node] (Standalone)
2. Workflow Integration
[HTTP Request] → [Chat Node] → [Action Node]
Connect Chat Nodes to workflows for automated processing:
- Previous node output becomes user message
- Chat Node processes with AI and responds
- Output passes to next node in workflow
- Maintains conversation memory across executions
3. Memory-Enabled Agent Processing
[Start] → [Chat Node] → [Condition] → [Loop Back]
Use as an intelligent agent with persistent memory:
- Accumulates knowledge across interactions
- Maintains context for complex multi-turn conversations
- Enables sophisticated conversation flows and iterative problem-solving
Output Formats
Response Mode (Default)
Returns the AI's latest response as a simple string:
"AI response text here"
Best for: Simple value passing to downstream nodes
Full History Mode
Returns the complete conversation as an array of message objects:
[
  {
    "role": "user",
    "content": "User message",
    "timestamp": 1640995200000
  },
  {
    "role": "assistant",
    "content": "AI response",
    "timestamp": 1640995205000
  }
]
Best for: When downstream nodes need conversation context
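The two output modes can be modeled as a single function over the conversation history. This is a hedged sketch of the behavior described above; the function name `buildOutput` and the empty-string fallback when no assistant message exists are assumptions.

```typescript
// Illustrative only: shape the node's output for downstream nodes.
interface ChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
  timestamp: number;
}

type OutputMode = 'response' | 'fullHistory';

function buildOutput(history: ChatMessage[], mode: OutputMode): string | ChatMessage[] {
  if (mode === 'fullHistory') return history; // entire conversation as an array
  // Response mode: the latest assistant message as a plain string.
  const last = [...history].reverse().find((m) => m.role === 'assistant');
  return last ? last.content : '';
}
```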
Workflow Execution
Input Processing
When connected to workflows, the Chat Node:
- Validates Input: Ensures previous node output is string type
- Processes Message: Treats input as user message in conversation
- AI Processing: Sends to configured model with chat history
- Generates Output: Based on output mode configuration
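The validation step above can be sketched as a simple type guard. The function name `validateInput` and the rejection of empty messages are assumptions for illustration; the documented behavior is only that string input is required.

```typescript
// Sketch of the input-validation step: the Chat Node only accepts
// string input from the previous node.
function validateInput(input: unknown): string {
  if (typeof input !== 'string') {
    throw new Error(`Chat Node expects string input, got ${typeof input}`);
  }
  if (input.trim().length === 0) {
    throw new Error('Chat Node received an empty message');
  }
  return input;
}
```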
Run Button
A green Play button appears in the header when connected to workflows:
- Triggers workflow execution for this node
- Shows error badge if input validation fails
- Provides visual feedback during processing
Error Handling
- Input validation ensures string input type
- Model configuration errors shown in chat interface
- Network errors displayed as system messages
- Graceful fallback for API failures
Best Practices
Standalone Usage
- Set appropriate system prompts for desired AI behavior
- Use temperature settings to control response style and creativity
- Clear chat history when starting new conversation topics
- Monitor local storage usage for very long conversations
Workflow Integration
- Use Response mode for simple value passing between nodes
- Use Full History mode when downstream nodes need conversation context
- Enable message limiting for performance with long conversations
- Validate input types in upstream nodes to ensure string data
Performance Optimization
- Message Limiting: Enable for workflows with extended conversations
- Local Storage: Chat history stored efficiently in browser
- Context Management: Limit messages sent to AI for faster responses
- Model Selection: Choose appropriate models for use case complexity
Integration Examples
Customer Support Flow
[Webhook Trigger] → [Chat Node: Support] → [Condition: Escalate?] → [Action: Create Ticket]
Content Generation Pipeline
[Form Input] → [Chat Node: Writer] → [Chat Node: Editor] → [Action: Save Content]
Interactive Tutorial System
[Start] → [Chat Node: Tutor] → [Condition: Understanding?] → [Loop: Clarify]
Memory Management
Storage Architecture
- Local Storage: Conversations stored in browser for persistence
- Isolation: Each node maintains separate conversation history
- Data Structure: Array of message objects with timestamps
- Persistence: Survives page reloads, browser restarts, and sessions
Chat Message Structure
interface ChatMessage {
  id: string        // Unique message ID
  role: 'user' | 'assistant' | 'system'
  content: string   // Message text
  timestamp: number // Unix timestamp in milliseconds
  metadata?: any    // Additional data
}
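Per-node persistence with this structure can be sketched as a small load/save pair. The storage key format `chat-node-history:<nodeId>` is an assumption, and a `Storage`-like object is injected so the logic also runs outside the browser; in the app itself this would be `localStorage`.

```typescript
// Hypothetical sketch of per-node history persistence.
interface ChatMessage {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  timestamp: number;
}

// Minimal Storage-like interface (localStorage satisfies it in the browser).
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Assumed key format: one entry per node instance keeps histories isolated.
const keyFor = (nodeId: string) => `chat-node-history:${nodeId}`;

function saveHistory(store: KVStore, nodeId: string, msgs: ChatMessage[]): void {
  store.setItem(keyFor(nodeId), JSON.stringify(msgs));
}

function loadHistory(store: KVStore, nodeId: string): ChatMessage[] {
  const raw = store.getItem(keyFor(nodeId));
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}
```

Because each node's history lives under its own key, clearing one chat never affects another, and the serialized array survives page reloads for as long as the browser retains local storage.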
Memory Best Practices
- Each node instance maintains separate chat history
- Clear history manually when needed to reset context
- Monitor conversation length for token usage optimization
- Use message limiting for long-running workflow integrations
Troubleshooting
Common Issues
Chat not responding:
- Check model configuration and API keys in settings
- Verify network connectivity for local model endpoints
- Review browser console for API errors and warnings
- Ensure valid model ID selection
Messages not persisting:
- Check browser storage permissions and availability
- Verify sufficient storage space and quota limits
- Clear browser cache if data appears corrupted
- Check for storage quota restrictions in browser settings
Workflow execution fails:
- Ensure input from previous node is string type
- Check Chat node configuration for required fields
- Verify model endpoints are accessible and responding
- Review error messages in node status badge
Performance issues:
- Enable message limiting for long conversations
- Choose appropriate model for complexity and speed requirements
- Clear old chat histories periodically to free storage
- Monitor token usage for cost optimization
Debugging Tips
- Check Network tab for API request/response details
- Monitor Console for error messages and warnings
- Use node status badges for execution state information
Advanced Features
Template Variables
Use {{variable}} syntax in system prompts to reference workflow data:
You are a helpful assistant for {{user.name}}.
The current task is: {{input.task}}
Previous context: {{previousNode.output}}
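Substitution of these placeholders can be sketched as a dotted-path lookup over the workflow data. This is illustrative only: the function name `renderTemplate` and the choice to leave unknown placeholders untouched are assumptions, not the product's documented behavior.

```typescript
// Hypothetical interpolation of {{path.to.value}} placeholders.
function renderTemplate(template: string, data: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path: string) => {
    // Walk the dotted path (e.g. "user.name") through the data object.
    const value = path.split('.').reduce<unknown>(
      (obj, key) =>
        obj && typeof obj === 'object' ? (obj as Record<string, unknown>)[key] : undefined,
      data,
    );
    // Assumption: unresolved variables are passed through unchanged.
    return value === undefined ? match : String(value);
  });
}
```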
Custom Model Endpoints
Configure local or custom model endpoints:
Provider: local
Endpoint: http://localhost:8080/v1/chat/completions
Model: custom-model-name
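A request against such an endpoint follows the OpenAI-compatible chat-completions shape. The sketch below only builds the payload; the helper name `buildChatRequest` and the default temperature are assumptions, while the URL and model name are the example values from the configuration above.

```typescript
// Request body fields for an OpenAI-compatible /v1/chat/completions endpoint.
interface ChatCompletionRequest {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
}

function buildChatRequest(
  endpoint: string,
  model: string,
  messages: { role: string; content: string }[],
  temperature = 0.7, // assumed default for the example
): { url: string; body: string } {
  const payload: ChatCompletionRequest = { model, messages, temperature };
  return { url: endpoint, body: JSON.stringify(payload) };
}

// Sending it is then a single POST, e.g.:
// const { url, body } = buildChatRequest(endpoint, model, history);
// const res = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body });
```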
Conversation Analytics
Monitor chat performance and usage:
- Message count and frequency
- Token usage per conversation
- Response time metrics
- User engagement patterns
Migration from Agent Nodes
Chat Nodes can replace Agent Nodes when conversation memory is needed:
Similarities
- Similar model configuration options
- Compatible workflow input/output patterns
- Same AI provider support and authentication
- Equivalent performance and capabilities
Enhancements
- Persistent Memory: Conversation history across executions
- Interactive Interface: Real-time chat experience
- Memory Management: Configurable context size controls
- Standalone Capability: Works without workflow connections
Migration Steps
- Replace Agent node with Chat node in workflow
- Copy over model configuration and system prompt
- Configure output mode based on downstream node requirements
- Test workflow execution with conversation memory enabled
Related Documentation
- Node Reference Overview - Complete guide to all node types
- Agent Node Guide - Single-prompt AI processing
- Code Node Guide - Custom JavaScript execution
- Template Variables - Dynamic value replacement
- Workflow Building - Creating complex automations
- Environment Variables - Secure configuration storage