FlowEngine

⚡ Enterprise Integration Platform

Build. Deploy. Monitor.
Enterprise integrations made simple

Visual flow-based programming with 50+ node types, dual runtime modes, and real-time monitoring.

No vendor lock-in. Fully customizable. Built in-house for complete control.

50+ Node Types
2 Runtime Modes
100% Customizable

Quick Navigation

đŸ—ī¸ Architecture ✨ Features 🤖 MCP Server 📊 Monitoring 🔌 Connections 🔄 Subflows ⚡ Functions â›“ī¸ Chain Flows đŸ› ī¸ Technology

What is FlowEngine?

FlowEngine is an enterprise-grade visual flow-based integration platform that enables you to build, deploy, and monitor complex data workflows without writing extensive code.

With a powerful drag-and-drop editor, 50+ node types, dual runtime modes (Worker Thread and Docker), and comprehensive monitoring capabilities, FlowEngine simplifies enterprise integrations while giving you complete control and zero vendor lock-in.

Powerful Features

Everything you need to build enterprise-grade integrations without writing complex code

🎨

Visual Flow Editor

Drag-and-drop interface powered by React Flow. Build complex workflows visually with real-time validation and live debugging.

⚡

Dual Runtime Modes

Choose between Worker Thread (lightweight) or Docker Container (isolated) runtimes. Hot-redeploy without downtime.

🔌

50+ Node Types

Comprehensive node library covering databases, messaging, HTTP, file operations, transformations, and custom functions.

🔄

Subflows & Reusability

Create reusable subflows that can be shared across projects. Build once, use everywhere with nested subflow support. See the dedicated Subflows section below for details.

â›“ī¸

Flow Chaining

Chain multiple flows together for complex workflows. Automatic state management and resource balancing between flows.

📊

Real-Time Analytics

Comprehensive metrics tracking with execution history, performance monitoring, and analytics dashboards. Track CPU, memory, cycle times, and message flows.

🐛

Live Debugging

Real-time debug panel and live logs with flow-specific filtering. See exactly what's happening in your flows as they execute.

💾

Flow Storage

Shared storage system for passing data between unconnected nodes. Perfect for queue patterns and state management.

🔐

Connection Management

Centralized connection management for databases, MQTT, WebSocket, SFTP, Redis, and Kafka. Secure credential storage.

📁

File Processing

Upload and parse CSV/JSON files. Built-in support for file operations with automatic format detection.

âš™ī¸

Custom Functions

Execute JavaScript and Python code within flows. Full access to Node.js ecosystem and Python libraries.

🚀

High Performance

Bulk write operations, message queuing, concurrency control, and optimized execution engine for enterprise-scale workloads.

📚

Version Control

Automatic version snapshots on every save and deploy. Track changes, view history, and see what changed between versions with change summaries.

⚡

Easy Deployment

One-click deploy with hot-redeploy. Update flows in production instantly without downtime. Real-time deployment status tracking via WebSocket.

â†Šī¸

Easy Rollbacks

Rollback to any previous version with a single click. Automatic pre-rollback snapshots ensure you can always recover. View full version history.

System Architecture

How FlowEngine orchestrates flows across multiple runtime environments

đŸŗ

Docker Runtime

Isolated containers with resource limits

Flow Container 1
✓ CPU: 2 cores
✓ Memory: 512MB
Flow Container 2
✓ CPU: 1 core
✓ Memory: 256MB
⚡

FlowEngine

Core Orchestration Engine

Node Registry: 50+
Active Flows: ∞
âš™ī¸

Worker Runtime

Lightweight thread-based execution

Worker Thread 1
✓ Low latency
✓ Shared memory
Worker Thread 2
✓ Fast startup
✓ Resource efficient
📊
Metrics Collection
CPU, Memory, Cycle Time
🔌
Connection Manager
DB, MQTT, WebSocket, etc.
🚀
Hot Redeploy
Zero-downtime updates
🐛
Live Debugging
Real-time flow inspection

By The Numbers

50+ Node Types
2 Runtime Modes
10+ Database Types
100% Customizable

Real-Time Monitoring & Analytics

Configure what to monitor and get insights into your flow performance

âš™ī¸ Configure Monitoring

đŸ’ģ CPU Usage
Enabled
âš ī¸ Warning: 70% | 🔴 Critical: 90%
Duration: 30 seconds
💾 Memory Usage
Enabled
âš ī¸ Warning: 70% | 🔴 Critical: 90%
Duration: 30 seconds
âąī¸ Cycle Time
Enabled
âš ī¸ Warning: 1000ms | 🔴 Critical: 5000ms
Duration: 30 seconds
❌ Error Rate
Enabled
âš ī¸ Warning: 10% | 🔴 Critical: 25%
Duration: 60 seconds
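Expressed as configuration, these thresholds could look roughly like the sketch below. This is illustrative only: the field names are assumptions that mirror the list above, not FlowEngine's actual monitoring schema.

// Hypothetical per-flow monitoring config mirroring the thresholds above.
// Field names are illustrative, not FlowEngine's actual schema.
{
  cpu:       { enabled: true, warning: 70,   critical: 90,   durationSec: 30 },
  memory:    { enabled: true, warning: 70,   critical: 90,   durationSec: 30 },
  cycleTime: { enabled: true, warning: 1000, critical: 5000, durationSec: 30 },  // milliseconds
  errorRate: { enabled: true, warning: 10,   critical: 25,   durationSec: 60 }   // percent
}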

📊 Analytics Dashboard

Memory Usage: 45.2 MB (↓ 12% from last hour)
CPU Usage: 23.5% (↓ 5% from last hour)
Cycle Time Trend
Messages Flow (Today): 1,234 received â€ĸ 987 sent

Comprehensive Node Library

Everything you need for enterprise integrations, all in one platform

Core Nodes

  • Inject (Manual/Auto)
  • Debug Output
  • No-op Pass-through

API & HTTP

  • HTTP Trigger (REST API)
  • HTTP Request (Outbound)
  • HTTP Response

Messaging

  • MQTT In/Out
  • WebSocket In/Out
  • Socket.IO In/Out
  • Kafka/Redpanda

Databases

  • MongoDB (Find/Insert/Update/Delete)
  • PostgreSQL Support
  • MySQL Support
  • Aggregation Pipelines
  • Bulk Write Operations

Data Processing

  • JavaScript Functions
  • Python Functions
  • Transform Messages
  • JSON Parse/Stringify
  • Template Engine

Control Flow

  • Switch/Conditional Routing
  • Delay/Timing
  • For Loops
  • Change Properties

File Operations

  • File Upload (CSV/JSON)
  • CSV Parser
  • SFTP List/Download/Upload
  • SFTP Delete

Advanced Features

  • Subflows (Reusable)
  • Flow Chaining
  • Redis Pub/Sub
  • Flow Storage (Queue Pattern)
  • Global Variables
  • Error Handling

Why FlowEngine?

The perfect enterprise integration tool for modern businesses

đŸĸ

Enterprise Ready

Built for scale with resource limits, monitoring, alerting, and comprehensive error handling. Production-grade from day one.

🔧

Fully Customizable

In-house developed means complete control. Add custom nodes, extend functionality, and adapt to your exact needs.

⚡

High Performance

Optimized execution engine with bulk operations, message queuing, and efficient resource utilization.

đŸ›Ąī¸

Secure & Isolated

Docker runtime provides complete isolation. Resource limits keep runaway flows from exhausting the host. Credentials are stored securely.

📈

Real-Time Monitoring

Comprehensive metrics, analytics dashboards, execution history, and performance tracking. Know exactly what's happening.

🔄

Zero Downtime

Hot-redeploy flows without restarting. Update flows in production without service interruption.

Connection Management

Centralized connection management for databases, messaging, and external services

💾

Databases

  • ▸ MongoDB
  • ▸ PostgreSQL
  • ▸ MySQL
  • ▸ Redis
📡

Messaging

  • ▸ MQTT
  • ▸ WebSocket
  • ▸ Socket.IO
  • ▸ Kafka/Redpanda
📁

File Storage

  • ▸ SFTP
  • ▸ Local File System

Node Registry Structure

âš™ī¸
Core Nodes
Inject, Debug, No-op
💾
Database Nodes
Find, Insert, Update, Delete, Bulk Write
📡
Messaging Nodes
MQTT, WebSocket, Socket.IO, Kafka
⚡
Function Nodes
JavaScript, Python
🔄
Control Nodes
Switch, Delay, Loop, Change
📄
File Nodes
Upload, CSV Parse, SFTP

Function Nodes: JavaScript & Python

Execute custom code with full access to Node.js and Python ecosystems

đŸŸĸ

JavaScript Functions

Full Node.js runtime access

// Access message payload
const data = msg.payload;

// Use context storage
const queue = context.getStorage('events') || [];
queue.push(data);
context.setStorage('events', queue);

// Access plugins
const protobuf = plugins.plugins.protobuf;

// Apply your processing logic, then return the message
const processed = data;
msg.payload = processed;
return msg;
🐍

Python Functions

Full Python ecosystem support

# Access message payload
data = msg['payload']

# Read from storage (read-only)
queue = storage.get('events', [])

# Access environment variables
api_key = env.get('API_KEY')

# Process data
processed = process_data(data)
msg['payload'] = processed

đŸ“Ļ Package Management

Easily add npm packages or Python libraries to extend your function nodes with powerful capabilities.

đŸŸĸ

JavaScript / npm

Install packages in the server's node_modules directory. Then use require() in your function code.

# Install package
npm install lodash

# Use in function node
const _ = require('lodash');
const result = _.chunk(msg.payload, 2);
🐍

Python / pip

Add libraries to requirements.txt and rebuild the Docker image, or specify allowed modules per function node.

# Add to requirements.txt
pandas==2.0.3
requests==2.31.0

# Use in function node
import pandas as pd
df = pd.DataFrame(msg['payload'])
💡
Quick & Easy
Install npm packages with a single command, or add Python libraries to requirements.txt. No complex configuration needed. Packages are immediately available in your function nodes.

🔐 Variables & Storage

FlowEngine provides three powerful mechanisms for managing data and configuration across your flows.

🌍

Environment Variables

Per-flow configuration stored in the database. Perfect for API keys, URLs, and configuration values. Set via Flow Settings → Env Vars.

// JavaScript
const apiKey = context.getEnv('API_KEY');
const baseUrl = context.env.BASE_URL;

// Template/Transform
{{env.API_KEY}}

# Python
api_key = env.get('API_KEY')
💾 Persisted in database â€ĸ Read-only during execution
💾

Flow Storage

Per-flow shared state for queues, buffers, and runtime data. All nodes in a flow can read/write. Perfect for producer/consumer patterns.

// Get/Set storage
const queue = context.getStorage('queue', []);
queue.push(msg.payload);
context.setStorage('queue', queue);

// Clear storage
context.clearStorage('queue');
⚡ In-memory â€ĸ Read/Write â€ĸ Cleared on redeploy
🌐

Global Variables

Shared across all nodes in a flow. Use Flow Storage as global variables for counters, flags, and shared state between unconnected flow sections.

// Counter example
const count = context.getStorage('counter', 0);
context.setStorage('counter', count + 1);

// State machine
context.setStorage('state', 'processing');
🔄 Shared state â€ĸ Queue patterns â€ĸ State machines
💡 Use Cases
  • Env Vars: API keys, base URLs, timeouts, feature flags
  • Flow Storage: Queues, buffers, batch processing
  • Global Vars: Counters, state machines, shared flags
âš™ī¸ Quick Setup
Env Vars: Flow Settings → Environment Variables
Storage: Use context.getStorage() / setStorage() in function nodes
Access: Available in all nodes via context

Subflows: Reusable Flow Components

Build once, use everywhere. Create reusable subflows that can be shared across all flows and projects

🔄 What are Subflows?

Subflows are global, reusable flow components that can be used across multiple flows and projects. They allow you to encapsulate common logic and reuse it anywhere.

🌐
Global & Reusable
Not tied to any project. Use in any flow, including chain flows and other subflows.
🔗
Nested Support
Subflows can use other subflows, enabling complex modular architectures.
⚡
Runtime Inheritance
Subflows run in the parent flow's runtime environment (worker or docker).
Data Flow Pattern
Parent Flow → subflow:instance → subflow:in → Processing Nodes → subflow:out → subflow:instance → Parent Flow

📝 Creating a Subflow

1 Required Entry Node
Add a subflow:in node. This is the ONLY entry point.
2 Add Processing Logic
Add any nodes (functions, transforms, etc.) to process msg.payload.
3 Required Exit Node
Add a subflow:out node. This sends data back to the parent flow.

🔌 Using a Subflow

1 Add Instance Node
Drag a subflow:instance node to your flow.
2 Configure Subflow ID
Set the subflowId to reference your subflow (see the sketch after these steps).
3 Connect & Use
Connect nodes to the instance. msg.payload flows through automatically.
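For step 2, a subflow:instance entry in a flow definition might look roughly like the sketch below. Only the node type and the subflowId property come from this document; the surrounding structure and the example ids are assumptions.

// Illustrative only: the exact flow-definition schema is assumed, not documented here.
{
  id: "node-42",
  type: "subflow:instance",
  config: {
    subflowId: "normalize-sensor-data"   // references the subflow created above
  }
}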

Common Use Cases

🔄
Data Transformation
Reusable transformation logic for common data formats (JSON, CSV, XML).
✅
Validation
Standard validation rules that can be applied across multiple flows.
📊
Logging & Monitoring
Consistent logging patterns and metrics collection across flows.
🔐
Authentication
Reusable authentication and authorization logic for API calls.
📧
Notification
Standard notification templates for alerts, emails, or webhooks.
🔄
Error Handling
Consistent error handling and retry logic across all flows.

Visual Flow Examples

See how flows look in the React Flow editor

MQTT to Database

📡 MQTT In → ⚡ Function → 💾 DB Insert

HTTP API with Transform

🌐 HTTP Trigger → 🔄 Transform → 📤 Response

Subflow Integration

Inject → Subflow → Subflow → Bulk Write
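As a concrete sketch of the Function step in the "MQTT to Database" example above, a JavaScript function node could reshape the incoming MQTT payload before the DB Insert node stores it. The payload field names below are assumptions for illustration.

// Function node sketch for the MQTT → Function → DB Insert example.
// Assumes the MQTT message is a JSON string; field names are illustrative.
const reading = JSON.parse(msg.payload);

msg.payload = {
  sensorId: reading.id,
  temperature: reading.temp,
  receivedAt: new Date().toISOString()
};

// The downstream DB Insert node writes msg.payload as a document
return msg;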

Container Access & Network Configuration

Direct bash access to running containers and advanced Docker network configuration

đŸ’ģ Bash Terminal Access

Access running flow containers and the FlowEngine server directly from the UI. Execute commands, inspect logs, and debug issues in real-time.

# Access FlowEngine Server
$ docker exec -it flowengine-server-dev bash

# Access Running Flow Container
$ docker exec -it flowengine-default-flow-123 bash

# Check container status
$ ps aux | grep node
$ netstat -tulpn
🔍
Container Inspection
View container info, resource usage, and network configuration
⚡
Command Execution
Run Linux commands with 30-second timeout protection
📋
Command History
Navigate previous commands with arrow keys

🌐 Docker Network Configuration

Advanced network configuration for Docker runtime flows. Connect containers to custom networks for isolation or multi-network access.

// Flow Runtime Configuration
{
  mode: "docker",
  networks: {
    primary: "custom-network",               // Optional
    additional: ["monitoring-net", "db-net"]
  }
}
🔗
Primary Network
Auto-detects server network by default, or specify custom network
➕
Additional Networks
Connect to multiple networks for cross-network communication
đŸ›Ąī¸
Network Isolation
Isolate flows on separate networks for security and resource management

Live Logs & Real-Time Debugging

Flow-specific debugging with live logs and debug panel in the flow editor

📜

Live Logs

Real-time flow execution logs

[2024-12-10 11:14:15] INFO  Flow started: test-flow-worker
[2024-12-10 11:14:15] INFO  Node: inject-1 executed
[2024-12-10 11:14:15] DEBUG Payload: {"test": "value"}
[2024-12-10 11:14:16] INFO  Node: function-1 processing...
[2024-12-10 11:14:16] INFO  Node: function-1 completed
[2024-12-10 11:14:16] DEBUG Subflow: increment-payload executed
[2024-12-10 11:14:17] INFO  Flow completed successfully
✨ Flow-Specific Filtering
Only see logs from the currently open flow. No noise from other flows.
🐛

Debug Panel

Interactive debug sidebar

Flow: debug-before-bulk â€ĸ 2m ago
Debug Before Bulk Write
{
  payload: [100 items],
  _msgid: "abc123",
  meta: {...}
}
đŸŽ¯ Per-Flow Debug View
Debug messages automatically filtered by flow. Subflow debug messages tagged and color-coded.

See FlowEngine in Action

Explore the visual flow editor, analytics dashboard, and powerful features

â›“ī¸

Chain Flows: Sequential Flow Execution

Link multiple flows together in a sequential chain. Automatic state management, resource balancing, and seamless payload passing

🔗 What are Chain Flows?

Chain flows allow you to link multiple flows together in a sequential execution chain. Each flow runs independently with its own runtime, and data flows seamlessly between them.

⚡
Per-Flow Runtime Lifecycle
Each flow starts its own runtime (worker/docker), executes, then shuts down. Next flow starts fresh.
đŸ“Ļ
Automatic Payload Passing
msg.payload automatically flows from one flow to the next via chain state storage.
🔄
Resource Balancing
No resource conflicts. Each flow gets clean state and isolated resources.
Chain Execution Flow
Flow 1 → Runtime Starts → Flow 2 → Payload Passed → Flow 3 → Chain Complete
â–ļī¸

chain:start

First flow only. Replaces entry nodes. Initializes chain state and receives trigger payload.

Required in first flow
â­ī¸

chain:next

Middle flows. Ends current flow, stores payload, starts next flow. Current flow runtime stops.

Required in middle flows
âšī¸

chain:end

Last flow only. Marks chain as completed, saves execution history, stops final runtime.

Required in last flow
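To make the three chain nodes concrete, here is a rough sketch of how a three-flow chain might be laid out. The flow names and node sequences are illustrative; only the chain node types and the msg.payload handoff are described in this document.

// Illustrative three-flow chain layout (names and node order are assumptions):
//
// Flow 1 (ingest):     chain:start → HTTP Request → chain:next
// Flow 2 (transform):  Function → chain:next   (payload restored from chain state)
// Flow 3 (load):       DB Insert → chain:end
//
// Inside any flow in the chain, the handed-over data is simply msg.payload:
const record = msg.payload;   // payload stored by the previous flow's chain node
return msg;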

How Chain Flows Work

📋 Setup Process

1 Create Chain Flow Project
Enable chain flows when creating a project. Set default runtime (worker/docker).
2 Create Chain Flows
Create multiple flows in the chain project. Each flow automatically becomes a chain flow.
3 Configure Chain Order
Use Chain Manager to set flow order. System automatically configures chain nodes.
4 Add Chain Nodes
First flow: chain:start. Middle flows: chain:next. Last flow: chain:end.

âš™ī¸ Execution Process

1 Chain Starts
First flow runtime starts. chain:start node initializes chain state and receives payload.
2 Flow Executes
Flow processes payload through its nodes. When chain:next is reached, payload is stored.
3 Runtime Transition
Current flow runtime stops. Next flow runtime starts. Payload retrieved from storage.
4 Chain Completes
Last flow reaches chain:end. Execution history saved. Chain marked as completed.

✨ Key Benefits

🔒
Resource Isolation
Each flow runs in its own runtime. No resource conflicts or state leakage.
📊
Execution Tracking
Track individual flow execution times and total chain cycle time.
🔄
Automatic State Management
Chain state, payload storage, and execution history handled automatically.
⚡
Scalable Architecture
Handle complex multi-stage workflows with clean separation of concerns.

đŸŽ¯ Use Cases

đŸ“Ĩ Multi-Stage Data Processing
Stage 1: Data ingestion → Stage 2: Transformation → Stage 3: Storage → Stage 4: Notification
🔄 ETL Pipelines
Extract from source → Transform data → Load into destination. Each stage isolated.
🔐 Workflow Orchestration
Authentication → Authorization → Processing → Audit Logging. Sequential security checks.
📊 Analytics Pipelines
Data collection → Aggregation → Analysis → Reporting. Each stage can scale independently.

Built With Modern Technology

Leveraging the best tools and frameworks for reliability and performance

âš›ī¸
React
đŸŸĸ
Node.js
đŸŗ
Docker
📊
MongoDB
🔌
Socket.IO
🎨
React Flow
⚡
Express
🐍
Python
đŸ“Ļ
Protobuf
đŸŽ¯
TypeScript
💨
Vite
🎨
Tailwind CSS
📡
MQTT
🔄
Kafka
💾
Redis

Why FlowEngine is Better

Compare FlowEngine to traditional integration platforms

Feature              | Traditional Platforms     | FlowEngine
Customization        | Limited, vendor-locked    | ✓ Fully customizable, in-house
Runtime Options      | Single runtime            | ✓ Worker Thread + Docker
Hot Redeploy         | Requires restart          | ✓ Zero-downtime updates
Real-Time Debugging  | Limited or none           | ✓ Live debug panel & logs
Subflows             | Basic support             | ✓ Global reusable subflows
Flow Chaining        | Not available             | ✓ Advanced chain flows
Analytics            | Basic metrics             | ✓ Comprehensive analytics dashboard
Cost                 | High licensing fees       | ✓ Self-hosted, no per-user fees
Extensibility        | Plugin marketplace only   | ✓ Add custom nodes easily
Python Support       | Limited or none           | ✓ Full Python execution

Enterprise Use Cases

FlowEngine powers integrations across industries

🏭

IoT Data Processing

Connect MQTT devices, process sensor data, store in databases, and trigger alerts. Real-time IoT pipeline management.

📊

Data Integration

ETL pipelines, data transformation, bulk imports, and multi-database synchronization. Handle millions of records efficiently.

🌐

API Orchestration

Build REST APIs, aggregate multiple services, handle webhooks, and create microservice orchestrations.

📧

Event-Driven Systems

Kafka/Redis event processing, message routing, pub/sub patterns, and real-time event transformations.

📁

File Processing

SFTP file monitoring, CSV/JSON parsing, batch processing, and automated file workflows.

🔄

Workflow Automation

Complex multi-step workflows, conditional routing, error handling, retries, and state management.

Coming Soon

Continuous innovation and feature development

Advanced Analytics Dashboard

Enhanced analytics with trend analysis, predictive metrics, custom reporting, and data visualization

Flow Versioning & Rollback

Version control for flows with diff visualization, one-click rollback, and change history tracking

Enhanced Plugin System

Expanded plugin registry with community contributions, custom node marketplace, and plugin versioning

Flow Templates & Marketplace

Pre-built flow templates for common integration patterns, industry-specific solutions, and template sharing

Multi-Tenant Support

Enterprise multi-tenancy with isolated environments, resource quotas, and tenant-level access control

Advanced Security & Compliance

RBAC, audit logging, encryption at rest, SOC 2 compliance, and enterprise security features

Flow Export/Import

Export flows as packages, import into other environments, and share flows across teams

AI-Powered Flow Generation

AI-assisted flow creation, automatic optimization suggestions, and intelligent error detection

Built In-House, Fully Customizable

Complete control over every aspect of the platform

đŸŽ¯

Custom Node Development

Easily add new node types. Extend BaseNode class and register in the node registry. Full access to the execution engine.
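A minimal sketch of what this might look like, assuming a BaseNode base class and a registry object as described above; the execute hook and the register call are placeholders for whatever the actual extension API exposes.

// Hypothetical custom node: extends BaseNode and registers with the node registry.
// Method and registry names are assumptions, not the documented API.
class UppercaseNode extends BaseNode {
  // Assumed execution hook: receives the incoming message, returns the outgoing one
  async execute(msg) {
    msg.payload = String(msg.payload).toUpperCase();
    return msg;
  }
}

// Assumed registration: maps a node type string to the implementing class
nodeRegistry.register('custom:uppercase', UppercaseNode);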

🔌

Plugin Architecture

Plugin system for Python utilities, Protobuf support, and custom libraries. Extend functionality without modifying core.

🎨

UI Customization

React-based UI means complete control over the interface. Customize themes, layouts, and user experience.

âš™ī¸

Runtime Extensions

Add custom runtime adapters. Support new execution environments or extend existing ones.

đŸ“Ļ

No Vendor Lock-in

Own your code, own your data. No proprietary formats or hidden dependencies. Full source code access.

🚀

Rapid Development

Modular architecture makes it easy to add features. Well-documented codebase and clear extension points.

Ready to Transform Your Integrations?

FlowEngine gives you the power to build enterprise-grade integrations faster, with complete control and zero vendor lock-in.

🤖

Model Context Protocol (MCP) Server

AI-Powered Rapid Flow Development. Create, update, test, and deploy flows instantly with AI assistance

🧠

AI-Assisted Development

The FlowEngine MCP Server enables AI assistants (like Cursor) to understand your flow architecture, generate flows following best practices, and manage your entire FlowEngine setup programmatically.

📚
Full Context Awareness
AI understands your node registry, connections, projects, and best practices
⚡
Instant Flow Generation
Describe what you need, AI creates the flow with proper structure and validation
🔄
Rapid Iteration
Update, test, and deploy flows in seconds without leaving your IDE
đŸ› ī¸

MCP Tools & Resources

📋
Flow CRUD
Create, Read, Update, Delete
🚀
Deploy & Test
Instant deployment
📊
Monitoring
Flow metrics & logs
â›“ī¸
Chain Flows
Flow chaining
💡 Example Workflow
# Ask AI in Cursor:
"Create a flow that listens to MQTT
topic 'sensors/temp', transforms data,
and stores in MongoDB"
→ AI generates flow with validation
→ Deploys automatically
→ Ready to test in seconds

Quick Setup in Cursor IDE

1 âš™ī¸ Configure MCP
Add FlowEngine MCP server to Cursor settings
2 🔄 Restart Cursor
MCP server loads automatically
3 đŸ’Ŧ Start Creating
Ask AI to create, update, or deploy flows
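For step 1, a Cursor MCP server entry generally lives in an mcp.json file under mcpServers. The entry below is a placeholder sketch: the server name, command, and path are assumptions, since this document does not give the actual FlowEngine MCP launch command.

// Placeholder sketch of a Cursor MCP settings entry (e.g. .cursor/mcp.json).
// The command and path are illustrative, not the real FlowEngine MCP server.
{
  "mcpServers": {
    "flowengine": {
      "command": "node",
      "args": ["/path/to/flowengine-mcp-server/index.js"]
    }
  }
}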
🔮

Future Vision

Upcoming features and enhancements planned for FlowEngine

💡
đŸŗ

Project Container Isolation

Deploy entire projects in their own dedicated containers. Each flow within the project runs as separate threads inside the container, providing enhanced isolation and resource management at the project level.

💡
🔌

Additional Messaging & Integration

Built-in support for additional messaging protocols and integration patterns. Expand connectivity options with native nodes for more enterprise communication standards and protocols.

💡
🔔

Per-Flow Monitoring Webhooks

Configure webhooks for individual flows to receive real-time notifications about flow execution, errors, performance metrics, and state changes. Integrate with external monitoring systems.

💡
🤖

AI-Powered Monitoring

AI monitoring analyzes metric trends to detect fluctuations and predict potential failures before they occur. Proactive alerts based on pattern recognition and anomaly detection in performance metrics.

💡
đŸŽ¯

Primary Integration Application

Position FlowEngine as the primary integration application for enterprise environments. Comprehensive integration hub connecting all systems, services, and data sources in a unified, manageable platform.