Workflow System
The ExtendedLM Workflow System provides visual workflow creation with drag-and-drop nodes, real-time execution, and API triggers.
Workflows are automated sequences of tasks that combine LLM calls, tool executions, conditional logic, and loops, so you can build complex AI pipelines visually without writing code.
Key Features
- Visual Editor: ReactFlow-based drag-and-drop interface
- 7 Node Types: Start, LLM, Tool, Code, Condition, Loop, End
- Real-time Execution: SSE-based progress updates
- Variable System: Pass data between nodes
- API Triggers: Execute workflows via HTTP/webhook
- Undo/Redo: Full history management
- Export/Import: JSON workflow persistence
Use Cases
- Document processing pipelines
- Data extraction and transformation
- Multi-step AI agents
- Automated report generation
- API integration workflows
- Scheduled tasks
Visual Workflow Editor
Location: src/workflow/visual-workflow/
Page: /workflow
Components
- VisualWorkflowBuilder: Main editor component
- WorkflowCanvas: ReactFlow canvas
- WorkflowToolbox: Node palette
- NodePropertiesPanel: Node configuration
- NodeExecutionPanel: Execution monitoring
Creating a Workflow
- Open the /workflow page
- Click "New Workflow"
- Drag nodes from toolbox to canvas
- Connect nodes by dragging edges
- Configure node properties in right panel
- Click "Save Workflow"
- Click "Execute" to run
Node Palette
Available nodes in the toolbox:
- Start: Workflow entry point
- LLM: Language model call
- Tool: Execute tool/function
- Code: Custom JavaScript
- Condition: If/else branching
- Loop: Iteration
- End: Workflow termination
Canvas Controls
- Zoom: Mouse wheel or +/- buttons
- Pan: Click and drag on canvas
- Select: Click node or edge
- Multi-select: Shift + click
- Delete: Select + Delete key
- Undo/Redo: Ctrl+Z / Ctrl+Y
Connection Validation
The editor validates connections to prevent invalid workflows (see the sketch after this list):
- Start node must be first
- End node must be last
- No cycles (except Loop nodes)
- Type-compatible connections
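The exact validation code lives in the editor; the sketch below only illustrates the rules above, with assumed node/edge shapes and an isValidConnection helper that are not part of the actual codebase.

// Illustrative sketch of the connection rules above (not the actual editor code).
type NodeType = 'start' | 'llm' | 'tool' | 'code' | 'condition' | 'loop' | 'end'

interface WorkflowNode { id: string; type: NodeType }
interface WorkflowEdge { source: string; target: string }

function isValidConnection(
  source: WorkflowNode,
  target: WorkflowNode,
  edges: WorkflowEdge[]
): boolean {
  // Start nodes accept no incoming edges; End nodes have no outgoing edges
  if (target.type === 'start' || source.type === 'end') return false

  // Reject cycles unless the source is a Loop node
  if (source.type !== 'loop' && createsCycle(source.id, target.id, edges)) return false

  return true
}

// Depth-first search: does a path already exist from `target` back to `source`?
function createsCycle(sourceId: string, targetId: string, edges: WorkflowEdge[]): boolean {
  const stack = [targetId]
  const visited = new Set<string>()
  while (stack.length > 0) {
    const current = stack.pop()!
    if (current === sourceId) return true
    if (visited.has(current)) continue
    visited.add(current)
    for (const edge of edges) {
      if (edge.source === current) stack.push(edge.target)
    }
  }
  return false
}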
Workflow Engine
File: src/bringup/executor/WorkflowEngine.ts
Execution Model
Workflows execute sequentially, node by node (see the sketch after this list):
- Begin at the Start node
- Execute the node's logic
- Pass its output to the connected node(s)
- Repeat until the End node is reached
- Return the final result
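Conceptually, the engine walks the graph from the Start node and threads each node's output into the next one. The sketch below shows that loop with assumed types and method names; it is not the real WorkflowEngine API.

// Simplified sequential execution loop (illustrative; not the real WorkflowEngine internals).
interface ExecNode {
  id: string
  type: string
  next?: string                      // id of the next node, if any
  run(input: unknown, context: Record<string, unknown>): Promise<unknown>
}

async function executeWorkflowGraph(
  nodes: Map<string, ExecNode>,
  startId: string,
  inputs: Record<string, unknown>
): Promise<unknown> {
  const context: Record<string, unknown> = { input: inputs }
  let current = nodes.get(startId)
  let output: unknown = inputs

  while (current) {
    // Execute the node and store its output so later nodes can reference it
    output = await current.run(output, context)
    context[current.id] = output

    if (current.type === 'end') break
    current = current.next ? nodes.get(current.next) : undefined
  }
  return output
}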
Error Handling
The engine provides graceful error handling:
- Node Errors: Caught and logged
- Abort Signals: Cancellable execution
- Retry Logic: Configurable retries (future)
Progress Tracking
Real-time execution updates via Server-Sent Events:
// Subscribe to execution events
const eventSource = new EventSource('/api/workflows/execute-visual')
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data)
switch (data.type) {
case 'node-start':
console.log(`Starting node: ${data.nodeId}`)
break
case 'node-complete':
console.log(`Completed node: ${data.nodeId}`)
break
case 'workflow-complete':
console.log('Workflow done!')
break
}
}
Start Node
Type: start
Purpose: Entry point for workflow execution.
Configuration
- Workflow Name: Display name
- Description: Optional description
- Inputs: Define workflow input parameters
Input Parameters
{
"inputs": [
{
"name": "user_query",
"type": "string",
"required": true,
"description": "User's question"
},
{
"name": "context",
"type": "string",
"required": false,
"description": "Additional context"
}
]
}
Outputs
Start node outputs are available as variables:
- input.user_query
- input.context
LLM Node
Type: llm
Purpose: Invoke language model for text generation.
Configuration
- Model: Select from available models
- System Prompt: Instruction for the model
- User Message: Dynamic message (supports variables)
- Temperature: 0.0 - 2.0 (creativity)
- Max Tokens: Maximum response length
Variable Interpolation
Use {{variable}} syntax to inject values:
System Prompt: You are a helpful assistant.
User Message: {{input.user_query}}
Context: {{prev_node.output}}
Example Configuration
{
"type": "llm",
"config": {
"model": "openai:gpt-4o",
"systemPrompt": "You are a document summarizer.",
"userMessage": "Summarize: {{input.document}}",
"temperature": 0.3,
"maxTokens": 500
}
}
Outputs
- this.output - LLM response text
- this.usage - Token usage statistics
Tool Node
Type: tool
Purpose: Execute predefined tools or functions.
Available Tools
- vectorQueryTool: RAG document search
- weatherTool: Weather information
- httpRequestTool: HTTP API calls
- Custom Tools: User-defined functions
Configuration
- Tool: Select tool from dropdown
- Parameters: Tool-specific inputs
Example: HTTP Request Tool
{
"type": "tool",
"config": {
"tool": "httpRequestTool",
"parameters": {
"url": "https://api.example.com/data",
"method": "POST",
"body": {
"query": "{{input.search_term}}"
},
"headers": {
"Authorization": "Bearer {{env.API_KEY}}"
}
}
}
}
Outputs
- this.output - Tool execution result
- this.error - Error message (if failed)
Code Node
Type: code
Purpose: Execute custom JavaScript code for data transformation.
Configuration
- Code: JavaScript code editor
- Inputs: Variables available in code
Available Variables
- input - Workflow inputs
- context - Execution context
- nodes - Access to previous node outputs
Example: Data Transformation
// Extract names from LLM output
const text = nodes.llm_node_1.output
const names = text.match(/\b[A-Z][a-z]+ [A-Z][a-z]+\b/g)
return {
names: names || [],
count: names ? names.length : 0
}
Example: API Response Processing
// Parse and filter API response
const data = JSON.parse(nodes.http_request.output)
const filtered = data.items
.filter(item => item.score > 0.8)
.map(item => ({
title: item.name,
summary: item.description.substring(0, 100)
}))
return { results: filtered }
Outputs
The code's return value becomes the node output:
- this.output - Returned object/value
Condition Node
Type: condition
Purpose: Conditional branching (if/else).
Configuration
- Condition: JavaScript expression (must return boolean)
- True Branch: Connect to node for true case
- False Branch: Connect to node for false case
Condition Expression
// Check if LLM response contains keyword
nodes.llm_node.output.includes("ERROR")
// Check numeric value
nodes.code_node.output.score > 0.5
// Check array length
nodes.tool_node.output.results.length > 0
// Complex condition
nodes.llm_node.output.length > 100 &&
nodes.sentiment.output.sentiment === "positive"
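The engine presumably evaluates the condition as a JavaScript expression against the execution context. The helper below is one possible sketch of that evaluation, not the engine's actual implementation.

// Illustrative evaluation of a condition expression against the node context.
// The real engine may sandbox this differently; `new Function` is just one approach.
function evaluateCondition(
  expression: string,
  nodes: Record<string, { output: any }>,
  input: Record<string, unknown>
): boolean {
  const fn = new Function('nodes', 'input', `return Boolean(${expression})`)
  return fn(nodes, input) as boolean
}

// Example: routes to the "true" branch when the RAG search returned results
const branch = evaluateCondition(
  'nodes.rag_search.output.results.length > 0',
  { rag_search: { output: { results: [1, 2, 3] } } },
  {}
)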
Example Configuration
{
"type": "condition",
"config": {
"condition": "nodes.rag_search.output.results.length > 0"
},
"connections": {
"true": "process_results_node",
"false": "no_results_found_node"
}
}
Outputs
- this.branch - "true" or "false"
- this.condition_result - Boolean result
Loop Node
Type: loop
Purpose: Iterate over arrays or repeat until condition.
Loop Types
- For Each: Iterate over array items
- While: Repeat while condition is true
- Count: Repeat N times
Configuration (For Each)
{
"type": "loop",
"config": {
"loopType": "forEach",
"array": "{{nodes.data_fetch.output.items}}",
"itemVariable": "current_item"
}
}
Configuration (While)
{
"type": "loop",
"config": {
"loopType": "while",
"condition": "nodes.counter.output.value < 10",
"maxIterations": 100
}
}
Loop Body
Nodes inside the loop have access to the following variables (a forEach sketch follows this list):
- loop.index - Current iteration (0-based)
- loop.item - Current item (for each)
- loop.count - Total iterations so far
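As a sketch of the forEach semantics, the snippet below runs an assumed runLoopBody callback once per array item and collects the results; the accumulation into iterations/results mirrors the outputs listed next, but is an assumption about the engine rather than its actual code.

// Illustrative forEach semantics: run the loop body once per item and collect results.
// `runLoopBody` stands in for executing the nodes wired inside the loop.
async function runForEach(
  items: unknown[],
  runLoopBody: (loop: { index: number; item: unknown; count: number }) => Promise<unknown>
) {
  const results: unknown[] = []
  for (let index = 0; index < items.length; index++) {
    const output = await runLoopBody({ index, item: items[index], count: index + 1 })
    results.push(output)
  }
  // Mirrors the loop node outputs described below
  return { iterations: items.length, results }
}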
Outputs
- this.iterations - Number of iterations executed
- this.results - Array of iteration results
End Node
Type: end
Purpose: Workflow termination with result.
Configuration
- Output: Select which node's output to return
- Transform: Optional final transformation
Example
{
"type": "end",
"config": {
"output": "{{nodes.final_llm.output}}",
"metadata": {
"workflow_name": "Document Processor",
"execution_time": "{{sys.execution_time}}"
}
}
}
Workflow Result
The End node's output becomes the workflow result:
{
"status": "success",
"output": "...",
"metadata": {
"workflow_id": "wf_123",
"execution_time_ms": 1234
}
}
Variable System
Variable Types
- System Variables: sys.*
- Workflow Inputs: input.*
- Node Outputs: nodes.node_id.*
- Environment Variables: env.*
- Loop Variables: loop.*
System Variables
sys.user_id # Current user ID
sys.workflow_id # Workflow ID
sys.execution_id # Execution ID
sys.timestamp # Current timestamp
sys.execution_time # Time since start (ms)
Accessing Node Outputs
nodes.llm_summary.output # LLM response
nodes.rag_search.output.results # RAG results array
nodes.data_transform.output.count # Processed count
Variable Interpolation
Use {{variable}} in node configurations:
System Prompt: Process the following: {{input.document}}
API URL: https://api.example.com/user/{{sys.user_id}}
Condition: nodes.score.output.value > {{env.THRESHOLD}}
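A minimal sketch of how {{...}} placeholders could be resolved against these scopes; the interpolate helper below is illustrative, not the engine's actual resolver.

// Resolve {{path.to.value}} placeholders from a context object such as
// { input: {...}, nodes: {...}, sys: {...}, env: {...}, loop: {...} }.
function interpolate(template: string, context: Record<string, any>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, path: string) => {
    const value = path
      .split('.')
      .reduce((obj: any, key: string) => (obj == null ? undefined : obj[key]), context)
    return value === undefined ? '' : String(value)
  })
}

// Example
interpolate('Process the following: {{input.document}}', {
  input: { document: 'Quarterly report...' }
})
// => "Process the following: Quarterly report..."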
Variable Picker
The editor provides auto-complete for variables when editing node configurations.
Workflow Execution
Execute from UI
- Open workflow in editor
- Click "Execute" button
- Provide input values (if required)
- Monitor progress in execution panel
- View results when complete
Execute via API
curl -X POST http://localhost:3000/api/workflows/execute-visual \
-H "Content-Type: application/json" \
-d '{
"workflowId": "wf_abc123",
"inputs": {
"user_query": "Summarize this document",
"document": "..."
}
}'
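The same call from TypeScript, using fetch and the request/response shapes shown in this document:

// Execute a workflow over HTTP (payload shape follows the curl example above).
async function executeWorkflow(workflowId: string, inputs: Record<string, unknown>) {
  const response = await fetch('http://localhost:3000/api/workflows/execute-visual', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workflowId, inputs })
  })
  if (!response.ok) throw new Error(`Workflow execution failed: ${response.status}`)
  return response.json()
}

const result = await executeWorkflow('wf_abc123', {
  user_query: 'Summarize this document',
  document: '...'
})
console.log(result.output)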
Streaming Execution
Server-Sent Events provide real-time updates:
const eventSource = new EventSource(
'/api/workflows/execute-visual?workflowId=wf_abc123'
)
eventSource.addEventListener('node-start', (e) => {
const data = JSON.parse(e.data)
console.log(`Node ${data.nodeId} started`)
})
eventSource.addEventListener('node-complete', (e) => {
const data = JSON.parse(e.data)
console.log(`Node ${data.nodeId} completed`)
})
eventSource.addEventListener('workflow-complete', (e) => {
const result = JSON.parse(e.data)
console.log('Done:', result.output)
eventSource.close()
})
Abort Execution
curl -X POST http://localhost:3000/api/workflows/abort \
-H "Content-Type: application/json" \
-d '{"executionId": "exec_123"}'
Webhook & API Triggers
Webhook Registry
File: generated-workflows/webhook-registry.json
Register workflows for HTTP/webhook triggers:
{
"webhooks": [
{
"workflowId": "wf_abc123",
"path": "/webhook/document-processor",
"method": "POST",
"auth": {
"type": "bearer",
"token": "secret_token_123"
}
}
]
}
Trigger via Webhook
curl -X POST http://localhost:3000/webhook/document-processor \
-H "Authorization: Bearer secret_token_123" \
-H "Content-Type: application/json" \
-d '{
"document": "This is a document to process."
}'
Scheduled Execution (Future)
Configure cron-based scheduling:
{
"schedule": {
"cron": "0 9 * * *",
"timezone": "UTC",
"enabled": true
}
}
Workflow API Reference
List Workflows
Endpoint: GET /api/workflows/list
{
"workflows": [
{
"id": "wf_abc123",
"name": "Document Processor",
"description": "Process documents",
"created_at": "2025-01-01T00:00:00Z",
"updated_at": "2025-01-02T00:00:00Z"
}
]
}
Create Workflow
Endpoint: POST /api/workflows/create
{
"name": "My Workflow",
"description": "Description",
"nodes": [...],
"edges": [...]
}
Update Workflow
Endpoint: PUT /api/workflows/edit
{
"workflowId": "wf_abc123",
"name": "Updated Name",
"nodes": [...],
"edges": [...]
}
Delete Workflow
Endpoint: DELETE /api/workflows/delete
{
"workflowId": "wf_abc123"
}
Execute Workflow
Endpoint: POST /api/workflows/execute-visual
{
"workflowId": "wf_abc123",
"inputs": {
"param1": "value1"
}
}
Execute Single Node (Debug)
Endpoint: POST /api/workflows/execute-node
{
"node": {
"type": "llm",
"config": {...}
},
"context": {...}
}
Abort Workflow
Endpoint: POST /api/workflows/abort
{
"executionId": "exec_123"
}
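Putting the endpoints above together, a small client sketch is shown below. The paths and payload shapes come from this reference; the api helper and the workflows wrapper are illustrative conveniences, not part of the codebase.

// Minimal client for the workflow API endpoints listed above (illustrative).
const BASE_URL = 'http://localhost:3000'

async function api<T>(method: string, path: string, body?: unknown): Promise<T> {
  const res = await fetch(`${BASE_URL}${path}`, {
    method,
    headers: { 'Content-Type': 'application/json' },
    body: body ? JSON.stringify(body) : undefined
  })
  if (!res.ok) throw new Error(`${method} ${path} failed: ${res.status}`)
  return res.json() as Promise<T>
}

export const workflows = {
  list: () => api('GET', '/api/workflows/list'),
  create: (workflow: { name: string; description?: string; nodes: unknown[]; edges: unknown[] }) =>
    api('POST', '/api/workflows/create', workflow),
  update: (workflow: { workflowId: string; name?: string; nodes?: unknown[]; edges?: unknown[] }) =>
    api('PUT', '/api/workflows/edit', workflow),
  remove: (workflowId: string) => api('DELETE', '/api/workflows/delete', { workflowId }),
  execute: (workflowId: string, inputs: Record<string, unknown>) =>
    api('POST', '/api/workflows/execute-visual', { workflowId, inputs }),
  abort: (executionId: string) => api('POST', '/api/workflows/abort', { executionId })
}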